modelId (string, lengths 5 to 139) | author (string, lengths 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-15 18:28:48) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 522 classes) | tags (list, lengths 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-15 18:28:34) | card (string, lengths 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
guinmoon/mpt-7b-storywriter-GGUF | guinmoon | 2023-10-18T08:06:39Z | 315 | 4 | null | [
"gguf",
"endpoints_compatible",
"region:us"
]
| null | 2023-10-16T15:18:31Z | [Original model](https://huggingface.co/mosaicml/mpt-7b-storywriter) |
pavani8/my-pet-dog | pavani8 | 2023-10-18T07:54:37Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-18T07:49:06Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by pavani8 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
chaotec/yrdy | chaotec | 2023-10-18T07:50:02Z | 0 | 1 | adapter-transformers | [
"adapter-transformers",
"music",
"finance",
"text-classification",
"ab",
"aa",
"af",
"ay",
"dataset:lmsys/lmsys-chat-1m",
"dataset:vikp/textbook_quality_programming",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-10-18T07:45:01Z | ---
license: apache-2.0
datasets:
- lmsys/lmsys-chat-1m
- vikp/textbook_quality_programming
language:
- ab
- aa
- af
- ay
metrics:
- bertscore
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- music
- finance
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
miittnnss/idk | miittnnss | 2023-10-18T07:49:08Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"autotrain",
"vision",
"image-classification",
"dataset:Carlangeloconcepcionrepoyo/autotrain-data-dambuhalang-pogi-scout",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-11-20T08:32:23Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- Carlangeloconcepcionrepoyo/autotrain-data-dambuhalang-pogi-scout
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.7850904815735922
library_name: transformers
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2169069849
- CO2 Emissions (in grams): 1.7851
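A minimal usage sketch, assuming the standard `transformers` image-classification pipeline and a placeholder image path:
```python
from transformers import pipeline

# Minimal sketch: classify a local image with the AutoTrain checkpoint.
# "photo.jpg" is a placeholder path, not a file from the repository.
classifier = pipeline("image-classification", model="miittnnss/idk")
predictions = classifier("photo.jpg")

for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```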
## Validation Metrics
- Loss: 0.026
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
irfansk/my-pet-dog | irfansk | 2023-10-18T07:48:21Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-18T07:44:07Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by irfansk following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
sreejith8100/donut-base-sroie3 | sreejith8100 | 2023-10-18T07:37:26Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-10-18T07:26:31Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie3
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
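A minimal usage sketch, assuming the usual Donut inference flow in `transformers`; the task start token `<s_sroie>` and the input file name are assumptions, since the card does not state them:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Minimal sketch: run document parsing on a receipt image. The task prompt
# "<s_sroie>" and "receipt.png" are assumed, not taken from the card.
processor = DonutProcessor.from_pretrained("sreejith8100/donut-base-sroie3")
model = VisionEncoderDecoderModel.from_pretrained("sreejith8100/donut-base-sroie3")

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = processor.tokenizer("<s_sroie>", add_special_tokens=False, return_tensors="pt")

outputs = model.generate(
    pixel_values,
    decoder_input_ids=task_prompt.input_ids,
    max_length=512,
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```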
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
hung200504/bert-5 | hung200504 | 2023-10-18T07:25:41Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16",
"base_model:finetune:bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T07:24:26Z | ---
license: cc0-1.0
base_model: bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16
tags:
- generated_from_trainer
model-index:
- name: bert-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-5
This model is a fine-tuned version of [bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16](https://huggingface.co/bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
joseluhf11/symptom_encoder_v8 | joseluhf11 | 2023-10-18T07:25:11Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-10-18T07:24:37Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 128-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 1968 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 5904,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lljllll2219/uk-mt5-base-xlsum-4000 | lljllll2219 | 2023-10-18T07:18:30Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"base_model:kravchenko/uk-mt5-base",
"base_model:finetune:kravchenko/uk-mt5-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-10-17T23:24:54Z | ---
base_model: kravchenko/uk-mt5-base
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: uk-mt5-base-xlsum-4000
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
config: ukrainian
split: validation
args: ukrainian
metrics:
- name: Rouge1
type: rouge
value: 4.2038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uk-mt5-base-xlsum-4000
This model is a fine-tuned version of [kravchenko/uk-mt5-base](https://huggingface.co/kravchenko/uk-mt5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7909
- Rouge1: 4.2038
- Rouge2: 0.6736
- Rougel: 4.1229
- Rougelsum: 4.1353
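A minimal usage sketch, assuming the standard `transformers` summarization pipeline; the Ukrainian input text is a placeholder:
```python
from transformers import pipeline

# Minimal sketch: summarize a Ukrainian article with the fine-tuned mT5
# checkpoint. The input string is a placeholder, not real data.
summarizer = pipeline("summarization", model="lljllll2219/uk-mt5-base-xlsum-4000")

article = "Текст новини українською мовою ..."
summary = summarizer(article, max_length=64, min_length=8, do_sample=False)
print(summary[0]["summary_text"])
```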
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.871 | 1.0 | 7201 | 1.9992 | 3.157 | 0.5155 | 3.1283 | 3.1298 |
| 2.3902 | 2.0 | 14402 | 1.9162 | 3.6231 | 0.595 | 3.5878 | 3.6125 |
| 2.2273 | 3.0 | 21603 | 1.8681 | 3.8688 | 0.5949 | 3.8101 | 3.8106 |
| 2.1219 | 4.0 | 28804 | 1.8264 | 3.7935 | 0.58 | 3.741 | 3.7647 |
| 2.0448 | 5.0 | 36005 | 1.8062 | 3.9388 | 0.7156 | 3.8877 | 3.9098 |
| 1.9898 | 6.0 | 43206 | 1.8077 | 4.3916 | 0.8113 | 4.3133 | 4.327 |
| 1.9483 | 7.0 | 50407 | 1.7935 | 4.2474 | 0.7119 | 4.1732 | 4.197 |
| 1.9209 | 8.0 | 57608 | 1.7909 | 4.2038 | 0.6736 | 4.1229 | 4.1353 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mosuhy/llm-tolkien-llama_2_7B_local | mosuhy | 2023-10-18T07:16:49Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-10-18T07:16:40Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
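A minimal sketch of the equivalent `BitsAndBytesConfig` and adapter loading, assuming a Llama-2-7B base checkpoint (the card does not name the base model) and reproducing only the 8-bit fields listed above:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Sketch only: "NousResearch/Llama-2-7b-hf" is an assumed base checkpoint;
# the adapter card does not say which Llama-2 7B weights were used.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mosuhy/llm-tolkien-llama_2_7B_local")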
### Framework versions
- PEFT 0.5.0
|
hung200504/distilbert-4 | hung200504 | 2023-10-18T07:15:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:OnePoint16/distilbert-medical-question_answer",
"base_model:finetune:OnePoint16/distilbert-medical-question_answer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T07:15:34Z | ---
license: apache-2.0
base_model: OnePoint16/distilbert-medical-question_answer
tags:
- generated_from_trainer
model-index:
- name: distilbert-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-4
This model is a fine-tuned version of [OnePoint16/distilbert-medical-question_answer](https://huggingface.co/OnePoint16/distilbert-medical-question_answer) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Thuwanon/bert-finetuned-mrpc | Thuwanon | 2023-10-18T07:12:40Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-15T07:15:40Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1377, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.34.0
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
sabrinah/BERT-SQuAD | sabrinah | 2023-10-18T07:01:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T01:14:16Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: PoA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PoA
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6105
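A minimal usage sketch, assuming the standard `transformers` question-answering pipeline; the question and context strings are illustrative only:
```python
from transformers import pipeline

# Minimal sketch: extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="sabrinah/BERT-SQuAD")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="PoA is a DistilBERT model fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```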
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.2729 |
| 2.6589 | 2.0 | 500 | 1.6600 |
| 2.6589 | 3.0 | 750 | 1.6105 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Mingfei0830/save_model | Mingfei0830 | 2023-10-18T06:52:04Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-18T06:00:15Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Mingfei0830/save_model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
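A minimal sampling sketch with `diffusers`, assuming a CUDA device and the instance prompt listed in the card metadata:
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: assumes a CUDA GPU; drop .to("cuda") and the fp16 dtype to run on CPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "Mingfei0830/save_model", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of sks dog"  # instance prompt from the card metadata
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```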
|
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_stsb | gokuls | 2023-10-18T06:52:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T06:42:50Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.19761262239980293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_ver2_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2194
- Pearson: 0.2187
- Spearmanr: 0.1976
- Combined Score: 0.2081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.3584 | 1.0 | 90 | 2.3085 | 0.1702 | 0.1471 | 0.1586 |
| 2.0513 | 2.0 | 180 | 2.4060 | 0.1479 | 0.1342 | 0.1411 |
| 1.9851 | 3.0 | 270 | 2.4888 | 0.0897 | 0.1163 | 0.1030 |
| 1.8287 | 4.0 | 360 | 2.7571 | 0.1643 | 0.1827 | 0.1735 |
| 1.6845 | 5.0 | 450 | 2.2194 | 0.2187 | 0.1976 | 0.2081 |
| 1.6892 | 6.0 | 540 | 2.4431 | 0.1882 | 0.1858 | 0.1870 |
| 1.5272 | 7.0 | 630 | 2.6124 | 0.1433 | 0.1572 | 0.1503 |
| 1.402 | 8.0 | 720 | 2.8100 | 0.1605 | 0.1671 | 0.1638 |
| 1.3122 | 9.0 | 810 | 2.7081 | 0.1298 | 0.1428 | 0.1363 |
| 1.187 | 10.0 | 900 | 2.8638 | 0.1724 | 0.1825 | 0.1775 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
qgyd2021/sft_llama2_stack_exchange | qgyd2021 | 2023-10-18T06:43:57Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"pytorch",
"llama",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-16T08:57:05Z | ---
license: apache-2.0
language:
- en
library_name: adapter-transformers
---
I followed [this script](https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/sft_llama2.py) to train this model.
Instead of the official [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) model, I used the repo [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf).
The model was trained on the [lvwerra/stack-exchange-paired](https://huggingface.co/datasets/lvwerra/stack-exchange-paired) dataset.
seq_length: 1024
steps: 1600
|
usman7071/my-car-model | usman7071 | 2023-10-18T06:43:23Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-18T06:37:52Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### my-car-model Dreambooth model trained by usman7071 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
gokuls/hBERTv1_new_pretrain_48_ver2_qqp | gokuls | 2023-10-18T06:39:59Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T02:10:04Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_48_ver2_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.7412317585951026
- name: F1
type: f1
value: 0.6035319084432319
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_ver2_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5524
- Accuracy: 0.7412
- F1: 0.6035
- Combined Score: 0.6724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5561 | 1.0 | 5686 | 0.5524 | 0.7412 | 0.6035 | 0.6724 |
| 0.5673 | 2.0 | 11372 | 0.6397 | 0.6318 | 0.0 | 0.3159 |
| 0.6117 | 3.0 | 17058 | 0.6165 | 0.6692 | 0.4617 | 0.5654 |
| 0.64 | 4.0 | 22744 | 0.6586 | 0.6318 | 0.0 | 0.3159 |
| 0.6592 | 5.0 | 28430 | 0.6584 | 0.6318 | 0.0 | 0.3159 |
| 0.659 | 6.0 | 34116 | 0.6582 | 0.6318 | 0.0 | 0.3159 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_qqp | gokuls | 2023-10-18T06:39:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T02:26:35Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.7573831313381153
- name: F1
type: f1
value: 0.6486622013682438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_ver2_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5073
- Accuracy: 0.7574
- F1: 0.6487
- Combined Score: 0.7030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5438 | 1.0 | 5686 | 0.5073 | 0.7574 | 0.6487 | 0.7030 |
| 0.5215 | 2.0 | 11372 | 0.5411 | 0.7379 | 0.6475 | 0.6927 |
| 0.5467 | 3.0 | 17058 | 0.6578 | 0.6323 | 0.0047 | 0.3185 |
| 0.5441 | 4.0 | 22744 | 0.5636 | 0.7429 | 0.5943 | 0.6686 |
| 0.5524 | 5.0 | 28430 | 0.5958 | 0.7216 | 0.5353 | 0.6284 |
| 0.5635 | 6.0 | 34116 | 0.5578 | 0.7358 | 0.5946 | 0.6652 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
hankokk/Taxi-v3 | hankokk | 2023-10-18T06:39:21Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-18T06:39:20Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper from the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="hankokk/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mosuhy/llm-tolkien-llama_2_7B | mosuhy | 2023-10-18T05:58:01Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-10-18T05:57:53Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
jfelgate/poca-SoccerTwos | jfelgate | 2023-10-18T05:55:35Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-10-17T21:15:03Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jfelgate/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
acarp3422/AnythingsPossible | acarp3422 | 2023-10-18T05:47:29Z | 0 | 0 | null | [
"arxiv:1910.09700",
"license:mit",
"region:us"
]
| null | 2023-09-29T04:12:59Z | ---
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hpandana/dqn-SpaceInvadersNoFrameskip-v4 | hpandana | 2023-10-18T05:45:10Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-18T05:44:32Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 583.00 +/- 150.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hpandana -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hpandana -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hpandana
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ZiaPratama/Yolov8_Pothole | ZiaPratama | 2023-10-18T05:38:37Z | 0 | 1 | null | [
"object-detection",
"en",
"region:us"
]
| object-detection | 2023-10-18T05:31:10Z | ---
language:
- en
pipeline_tag: object-detection
---
The training dataset for this model comes from https://www.dropbox.com/s/qvglw8pqo16769f/pothole_dataset_v8.zip?dl=1. The pre-trained model used is YOLOv8, and the transfer-learned model detects potholes in the road.
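A minimal usage sketch with the `ultralytics` package, assuming placeholder names for the weight file and a test image:
```python
from ultralytics import YOLO

# Minimal sketch: "best.pt" and "road.jpg" are placeholder names for the
# repository's weight file and a sample road image.
model = YOLO("best.pt")
results = model.predict("road.jpg", conf=0.25)

for r in results:
    print(r.boxes.xyxy, r.boxes.conf)
```
|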
hung200504/electra-finetuned-cpgqa | hung200504 | 2023-10-18T05:24:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:deepset/electra-base-squad2",
"base_model:finetune:deepset/electra-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T05:24:37Z | ---
license: cc-by-4.0
base_model: deepset/electra-base-squad2
tags:
- generated_from_trainer
model-index:
- name: electra-finetuned-cpgqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-finetuned-cpgqa
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
djimbe/my_awesome_billsum_model | djimbe | 2023-10-18T05:20:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:indosum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-16T06:55:00Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- indosum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: indosum
type: indosum
config: indosum_fold0_source
split: test
args: indosum_fold0_source
metrics:
- name: Rouge1
type: rouge
value: 0.2065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the indosum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4806
- Rouge1: 0.2065
- Rouge2: 0.1639
- Rougel: 0.2038
- Rougelsum: 0.2038
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.7495 | 1.0 | 892 | 0.5226 | 0.2061 | 0.1635 | 0.2033 | 0.2033 | 19.0 |
| 0.5326 | 2.0 | 1784 | 0.4929 | 0.2063 | 0.1639 | 0.2037 | 0.2037 | 19.0 |
| 0.4982 | 3.0 | 2676 | 0.4840 | 0.2065 | 0.1639 | 0.2038 | 0.2037 | 19.0 |
| 0.4958 | 4.0 | 3568 | 0.4806 | 0.2065 | 0.1639 | 0.2038 | 0.2038 | 19.0 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gyr66/RoBERTa-finetuned-privacy-detection | gyr66 | 2023-10-18T05:11:55Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"zh",
"dataset:gyr66/privacy_detection",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-10-16T15:25:06Z | ---
language:
- zh
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- gyr66/privacy_detection
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RoBERTa-finetuned-privacy-detection
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: gyr66/privacy_detection
type: gyr66/privacy_detection
config: privacy_detection
split: train
args: privacy_detection
metrics:
- name: Precision
type: precision
value: 0.6168845082494108
- name: Recall
type: recall
value: 0.7248237663645518
- name: F1
type: f1
value: 0.6665123278157193
- name: Accuracy
type: accuracy
value: 0.9061190926862569
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-finetuned-privacy-detection
This model is a fine-tuned version of [gyr66/RoBERTa-finetuned-privacy-detection](https://huggingface.co/gyr66/RoBERTa-finetuned-privacy-detection) on the gyr66/privacy_detection dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3534
- Precision: 0.6169
- Recall: 0.7248
- F1: 0.6665
- Accuracy: 0.9061
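A minimal usage sketch, assuming the standard `transformers` token-classification pipeline; the Chinese example sentence is invented for illustration:
```python
from transformers import pipeline

# Minimal sketch: aggregation_strategy="simple" merges word-piece predictions
# into whole entity spans.
ner = pipeline(
    "token-classification",
    model="gyr66/RoBERTa-finetuned-privacy-detection",
    aggregation_strategy="simple",
)

text = "张三的手机号是13800138000,住在北京市朝阳区。"  # invented example sentence
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```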
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 56
- eval_batch_size: 56
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2027 | 1.0 | 36 | 0.3485 | 0.5913 | 0.7273 | 0.6523 | 0.9030 |
| 0.1652 | 2.0 | 72 | 0.3534 | 0.6153 | 0.7314 | 0.6684 | 0.9053 |
| 0.143 | 3.0 | 108 | 0.3534 | 0.6169 | 0.7248 | 0.6665 | 0.9061 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.2
|
uukuguy/Mistral-7B-OpenOrca-lora-merged | uukuguy | 2023-10-18T05:06:18Z | 0 | 1 | peft | [
"peft",
"pytorch",
"mistral",
"Mistral",
"text-generation",
"en",
"license:llama2",
"model-index",
"region:us"
]
| text-generation | 2023-10-16T10:21:44Z | ---
language:
- en
library_name: peft
pipeline_tag: text-generation
tags:
- Mistral
license: llama2
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.0
verified: false
---
# Mistral-7B-OpenOrca-lora-merged
**This is a test.**
This is a regenerated model that combines the base model Mistral-7B-v0.1 with the LoRA model [Mistral-7B-OpenOrca-lora](https://huggingface.co/uukuguy/Mistral-7B-OpenOrca-lora).
The LoRA model was extracted from the parameter-efficient fine-tuned model ([Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)), and it still needs to be verified whether this LoRA model can achieve performance comparable to the original model.
The final goal is to create a toolkit that can simultaneously load multiple LoRA modules, and automatically switch to the appropriate combination of LoRA modules based on user queries to generate the best answer.
The source code is [here](https://github.com/uukuguy/multi_loras)
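A rough sketch of the extract-and-merge step using plain `peft`/`transformers` calls (the author's `multi_loras` toolkit is not shown here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Minimal sketch: attach the extracted LoRA to the Mistral base model and fold
# the adapter weights back in, producing a standalone merged checkpoint.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

model = PeftModel.from_pretrained(base, "uukuguy/Mistral-7B-OpenOrca-lora")
merged = model.merge_and_unload()  # returns a plain transformers model

merged.save_pretrained("./Mistral-7B-OpenOrca-lora-merged")
tokenizer.save_pretrained("./Mistral-7B-OpenOrca-lora-merged")
```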
## Mistral-7B-OpenOrca
- Extract the LoRA model [Mistral-7B-OpenOrca-lora](https://huggingface.co/uukuguy/Mistral-7B-OpenOrca-lora) from [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca);
- Merge the base model [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with the LoRA model into [Mistral-7B-OpenOrca-lora-merged](https://huggingface.co/uukuguy/Mistral-7B-OpenOrca-lora-merged);
- LLM Evaluation ...
### Local Test
| | ARC_acc_norm (25-shot) | HellaSwag_acc_norm (10-shot) | MMLU_acc (5-shot) | TruthfulQA_mc2 (0-shot) | GSM8K_acc (8-shot) | Open LLM Score |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Mistral-7B-OpenOrca | **71** | 83 | 61.42 | 45 | 40 | 65.11 |
| **r=256** | 68 | **84** | **64.28** | 46.953 | **41** | **65.81** |
| r=64 | 67 | 84 | 64.26 | **47.32** | **41** | 65.65 |
| *r=16* | *65* | *83* | *62.84* | *46.95* | *38* | *64.45* |
### Open LLM Leaderboard
| | ARC_acc_norm (25-shot) | HellaSwag_acc_norm (10-shot) | MMLU_acc (5-shot) | TruthfulQA_mc2 (0-shot) | Open LLM Score |
| ------ | ------ | ------ | ------ | ------ | ------ |
| Mistral-7B-SlimOrca | 62.54 | 83.86 | **62.77** | **54.23** | **65.85** |
| Mistral-7B-OpenOrca | **64.08** | **83.99** | 62.24 | 53.05 | 65.84 |
## lm-evaluation-harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Mistral-7B-OpenOrca | Mistral-7B-OpenOrca-lora| Mistral-7B-OpenOrca-lora-merged |
| --- | --- |--- | --- |
| ARC | 64.08 | | |
| HellaSwag | 83.99 | | |
| MMLU | 62.24 | | |
| TruthfulQA | 53.05 | | |
| Average | 65.84 | | |
## HumanEval
| Metric | Mistral-7B-OpenOrca | Mistral-7B-OpenOrca-lora| Mistral-7B-OpenOrca-lora-merged |
| --- | --- | --- | --- |
| humaneval-python | 35.976 | | |
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
hung200504/bert-uncased-finetuned-cpgqa | hung200504 | 2023-10-18T04:57:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:twmkn9/bert-base-uncased-squad2",
"base_model:finetune:twmkn9/bert-base-uncased-squad2",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T04:57:18Z | ---
base_model: twmkn9/bert-base-uncased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-uncased-finetuned-cpgqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-finetuned-cpgqa
This model is a fine-tuned version of [twmkn9/bert-base-uncased-squad2](https://huggingface.co/twmkn9/bert-base-uncased-squad2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
fahdsoliman/my_awesome_qa_model | fahdsoliman | 2023-10-18T04:57:26Z | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-16T06:54:05Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: fahdsoliman/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fahdsoliman/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7409
- Validation Loss: 1.9577
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5728 | 2.3969 | 0 |
| 2.0129 | 1.9577 | 1 |
| 1.7409 | 1.9577 | 2 |
### Framework versions
- Transformers 4.34.0
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
LoneStriker/speechless-code-mistral-7b-v1.0-recalibrate-8.0bpw-h6-exl2 | LoneStriker | 2023-10-18T04:46:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"llama-2",
"code",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:TokenBender/python_eval_instruct_51k",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-18T04:42:17Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
- TokenBender/python_eval_instruct_51k
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 50.0
verified: false
---
<p><h1> speechless-code-mistral-7b-v1.0 </h1></p>
### NOTE: Requantized using WizardLM_evol_instruct_V2_196k for calibration
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF)
The following datasets were used to fine-tune mistralai/Mistral-7B-v0.1 in order to improve the model's reasoning and planning abilities.
Total 201,981 samples.
- jondurbin/airoboros-2.2: Filter categories related to coding, reasoning and planning. 23,462 samples.
- Open-Orca/OpenOrca: Filter the 'cot' category in 1M GPT4 dataset. 74,440 samples.
- garage-bAInd/Open-Platypus: 100%, 24,926 samples.
- WizardLM/WizardLM_evol_instruct_V2_196k: Coding conversation part. 30,185 samples.
- TokenBender/python_eval_instruct_51k: “python” in output. 40,309 samples.
- Spider: 8,659 samples.
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | 50.0|
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
- CodeLlama-34B-Python: 53.29
- CodeLlama-34B-Instruct: 50.79
- CodeLlama-13B-Instruct: 50.6
- CodeLlama-34B: 45.11
- CodeLlama-13B-Python: 42.89
- CodeLlama-13B: 35.07
## lm-evaluation-harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC |59.64 |
| HellaSwag |82.25 |
| MMLU | 61.33 |
| TruthfulQA | 48.45 |
| Average | 62.92 |
## Parameters
| | |
|------ | ------ |
| lr | 2e-4 |
| lr_scheduler_type | cosine |
| weight_decay | 0.0 |
| optim | paged_adamw_8bit |
| flash_attention | True |
| rerope | False |
| max_new_tokens | 4096 |
| num_train_epochs | 2 |
| bits | 4 |
| lora_r | 64 |
| lora_alpha | 16 |
| lora_dropout | 0.05 |
| double_quant | True |
| quant_type | nf4 |
| dataset_format | airoboros |
| mini_batch_size | 2 |
| gradient_accumulation_steps | 32 |
| bf16 | True |
A40-48G x 2
| | |
|------ | ------ |
| epoch | 2.0 |
| train_loss | 0.5 |
| train_runtime | 1 day, 10:25:26.77 |
| train_samples_per_second | 3.194 |
| train_steps_per_second | 0.025 |
| eval_loss | 0.5146 |
| eval_runtime | 0:00:25.04 |
| eval_samples_per_second | 7.985 |
| eval_steps_per_second | |
|
peteryushunli/distilbert-base-uncased-finetuned-rap-lyrics-v1 | peteryushunli | 2023-10-18T04:25:19Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-08-30T01:27:58Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-rap-lyrics-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-rap-lyrics-v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.977 | 1.0 | 1258 | 1.9930 |
| 1.9568 | 2.0 | 2516 | 1.9718 |
| 1.947 | 3.0 | 3774 | 1.9477 |
| 1.9445 | 4.0 | 5032 | 1.9329 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
allenai/specter | allenai | 2023-10-18T04:19:07Z | 62,408 | 60 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"en",
"dataset:SciDocs",
"arxiv:2004.07180",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: "https://camo.githubusercontent.com/7d080b7a769f7fdf64ac0ebeb47b039cb50be35287e3071f9d633f0fe33e7596/68747470733a2f2f692e6962622e636f2f33544331576d472f737065637465722d6c6f676f2d63726f707065642e706e67"
license: apache-2.0
datasets:
- SciDocs
metrics:
- F1
- accuracy
- map
- ndcg
---
## SPECTER
SPECTER is a pre-trained language model for generating document-level embeddings of documents. It is pre-trained on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning.
If you're coming here because you want to embed papers, SPECTER has now been superseded by [SPECTER2](https://huggingface.co/allenai/specter2_proximity). Use that instead.
Paper: [SPECTER: Document-level Representation Learning using Citation-informed Transformers](https://arxiv.org/pdf/2004.07180.pdf)
Original Repo: [Github](https://github.com/allenai/specter)
Evaluation Benchmark: [SciDocs](https://github.com/allenai/scidocs)
Authors: *Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld*
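A minimal embedding sketch (an assumed example following the title + `[SEP]` + abstract input format from the original repo, with `[CLS]`-token pooling):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

papers = [{"title": "BERT", "abstract": "We introduce a new language representation model."}]
# Concatenate title and abstract with the tokenizer's separator token.
texts = [p["title"] + tokenizer.sep_token + (p.get("abstract") or "") for p in papers]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt", max_length=512)

outputs = model(**inputs)
embeddings = outputs.last_hidden_state[:, 0, :]  # [CLS] token as the document embedding
```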
|
openaccess-ai-collective/neft-exp1 | openaccess-ai-collective | 2023-10-18T04:16:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-18T03:54:48Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# out
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9422 | 0.02 | 1 | 1.0091 |
| 1.0215 | 0.2 | 13 | 1.0004 |
| 0.9933 | 0.41 | 26 | 1.0071 |
| 0.9197 | 0.61 | 39 | 1.0136 |
| 0.9285 | 0.81 | 52 | 1.0075 |
| 0.5858 | 1.02 | 65 | 1.0082 |
| 0.5522 | 1.22 | 78 | 1.0546 |
| 0.4992 | 1.42 | 91 | 1.0683 |
| 0.6085 | 1.62 | 104 | 1.0638 |
| 0.5118 | 1.83 | 117 | 1.0654 |
| 0.3243 | 2.03 | 130 | 1.1113 |
| 0.3196 | 2.23 | 143 | 1.1957 |
| 0.2582 | 2.44 | 156 | 1.2038 |
| 0.273 | 2.64 | 169 | 1.1949 |
| 0.2818 | 2.84 | 182 | 1.2000 |
| 0.1427 | 3.05 | 195 | 1.2817 |
| 0.1246 | 3.25 | 208 | 1.3245 |
| 0.1394 | 3.45 | 221 | 1.3561 |
| 0.1088 | 3.66 | 234 | 1.3770 |
| 0.0985 | 3.86 | 247 | 1.3731 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.14.0
|
Rabul/Le | Rabul | 2023-10-18T04:02:28Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"finance",
"text-classification",
"ae",
"dataset:lmsys/lmsys-chat-1m",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-10-18T04:01:32Z | ---
license: apache-2.0
datasets:
- lmsys/lmsys-chat-1m
language:
- ae
metrics:
- bertscore
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- finance
--- |
Afishally/my_awesome_eli5_mlm_model | Afishally | 2023-10-18T03:54:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:38:29Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2574 | 1.0 | 1141 | 2.0525 |
| 2.1639 | 2.0 | 2282 | 2.0132 |
| 2.118 | 3.0 | 3423 | 1.9563 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
vulture/my_awesome_eli5_mlm_model | vulture | 2023-10-18T03:53:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:53Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2346 | 1.0 | 1127 | 2.1004 |
| 2.1562 | 2.0 | 2254 | 2.0598 |
| 2.1183 | 3.0 | 3381 | 2.0245 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Pssssss/my_awesome_eli5_mlm_model | Pssssss | 2023-10-18T03:52:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:34:46Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2677 | 1.0 | 1132 | 2.0885 |
| 2.1499 | 2.0 | 2264 | 2.0546 |
| 2.1333 | 3.0 | 3396 | 2.0309 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ghzc/my_awesome_eli5_mlm_model | ghzc | 2023-10-18T03:51:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:34:04Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2446 | 1.0 | 1143 | 2.0583 |
| 2.1637 | 2.0 | 2286 | 2.0377 |
| 2.1135 | 3.0 | 3429 | 2.0078 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Cecilia0409/my_awesome_eli5_mlm_model | Cecilia0409 | 2023-10-18T03:51:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:35:04Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2236 | 1.0 | 1138 | 2.0770 |
| 2.1478 | 2.0 | 2276 | 2.0293 |
| 2.1061 | 3.0 | 3414 | 2.0344 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
NYP-J/my_awesome_eli5_mlm_model | NYP-J | 2023-10-18T03:50:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:35:48Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2456 | 1.0 | 1141 | 2.0617 |
| 2.1599 | 2.0 | 2282 | 2.0269 |
| 2.1218 | 3.0 | 3423 | 1.9757 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Obanana/my_awesome_eli5_mlm_model | Obanana | 2023-10-18T03:50:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:34:09Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2687 | 1.0 | 1137 | 2.0715 |
| 2.1714 | 2.0 | 2274 | 2.0012 |
| 2.1324 | 3.0 | 3411 | 1.9764 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
PHILANDER/my_awesome_eli5_mlm_model | PHILANDER | 2023-10-18T03:50:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:44Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2415 | 1.0 | 1146 | 2.0722 |
| 2.159 | 2.0 | 2292 | 2.0261 |
| 2.1127 | 3.0 | 3438 | 2.0136 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Nateile/my_awesome_eli5_mlm_model | Nateile | 2023-10-18T03:49:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:49Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2406 | 1.0 | 1117 | 2.0649 |
| 2.1558 | 2.0 | 2234 | 2.0260 |
| 2.1023 | 3.0 | 3351 | 2.0075 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
hung200504/cpgqa | hung200504 | 2023-10-18T03:45:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T03:45:22Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-cpgqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-cpgqa
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Ruri0804/Holicon | Ruri0804 | 2023-10-18T03:39:11Z | 0 | 4 | null | [
"stable-diffusion",
"text-to-image",
"safetensors",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-10-17T09:51:55Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- safetensors
---
# Holicon
A series of models based on my personal preferences.
The name Holicon has no particular meaning. The suffix may refer to a certain artist or an abbreviation of the models used in the merge. All listed models have the VAE baked in.
## Tips
1. Please feel free to use whatever parameters you want.
2. If you want to generate images similar to the example images, or want to see how I use these models, refer to the introduction below.
3. Download the image with a suffix other than 0 to read the parameters.
## Introduction
1. Holicon-F79
Suitable for generating low-saturation, flatter images, but weaker than the other models at responding to prompts.

2. Holicon-Hiten
Suitable for generating my personal favorite type of character, slightly younger-looking and with paler skin, thanks to the merging of PVC-style models.

3. Holicon-mao
This model can generate characters similar to Holicon-Hiten, but the images will be more pinkish. It has better scene-generation capabilities. In terms of usage, I recommend using it only with Hires fix. (Automatic1111-stable-diffusion-webui 1.6.0+)

The examples below use Hires fix.
Nordrin_little v2.5 as the first pass and Holicon-mao as Hires fix.

Holicon-F79 as the first pass and Holicon-mao as Hires fix.

## Recommended Settings
- Sampler: DPM++ SDE (30 ~ 50 steps)
- Hires fix Sampler: Euler a (15 ~ 40 steps)
- Upscaler 1: Latent (bicubic antialiased) for more details
- Upscaler 2: ScuNET PSNR for cleaner results
- Denoising: 0.5 ~ 0.6
- CFG: 7 ~ 9
|
luhee/distilhubert-music-classifier-finetuned-gtzan | luhee | 2023-10-18T03:38:16Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-10-17T17:31:23Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: distilhubert-music-classifier-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-music-classifier-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
BelleGroup/BELLE-on-Open-Datasets | BelleGroup | 2023-10-18T03:16:37Z | 0 | 12 | null | [
"pytorch",
"text2text-generation",
"zh",
"en",
"arxiv:2304.07854",
"license:gpl-3.0",
"region:us"
]
| text2text-generation | 2023-04-17T10:16:09Z | ---
license: gpl-3.0
tags:
- text2text-generation
pipeline_tag: text2text-generation
language:
- zh
- en
---
Considering LLaMA's license constraints, the model is for research and learning only.
Please strictly respect LLaMA's usage policy. We are not allowed to publish the LLaMA weights, even fine-tuned ones, but we can publish the difference: a patch that you apply to the original files.
The encryption is a simple XOR between files, ensuring that only people who have access to the original weights (from completely legal sources, of course) can transform them into the fine-tuned weights.
You can find the decryption code at https://github.com/LianjiaTech/BELLE/tree/main/models.
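As a conceptual illustration of the XOR patching idea only (this is not the official `decrypt.py`, which also handles file naming and hash verification — use the script from the repo for real conversions):

```python
# Conceptual sketch only; use the official decrypt.py from the BELLE repo in practice.
from itertools import cycle

def xor_patch(encrypted_path: str, original_path: str, output_path: str) -> None:
    """Recover fine-tuned weights by XOR-ing the encrypted file with the original LLaMA file."""
    with open(encrypted_path, "rb") as f:
        encrypted = f.read()
    with open(original_path, "rb") as f:
        original = f.read()
    # The original file may differ in length, so it is cycled here for illustration.
    with open(output_path, "wb") as f:
        f.write(bytes(a ^ b for a, b in zip(encrypted, cycle(original))))
```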
# Model Card for Model ID
## Welcome
If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE !
## Model description
We release our model described in the paper
[Towards Better Instruction Following Language Models for Chinese](https://github.com/LianjiaTech/BELLE/blob/main/docs/Towards%20Better%20Instruction%20Following%20Language%20Models%20for%20Chinese.pdf)
This model achieves the best performance compared with other instruction-following models, with a score of 0.762 on our evaluation set.

## Download, Convert & Check
1. After you git clone this model
```
md5sum ./*
29db882bdab3131ef05943ee8ba82e2c ./config.json.6375ff434583e14cfc1fd45f9f599ddb9c689cb9b8c542d427dc6d5dc1059037.enc
f9b33d359f17a437f6c24b4de6f2272e ./generation_config.json.fd7ff399e5568cc21a0a8414f43df88ef7c424995b9b97a90563165d2cf79efd.enc
794e28fff16ef8c3fe9e48e3aa6ccf3a ./pytorch_model-00001-of-00002.bin.b552ebc4dd499812cfe1e45ffcaad0ee93851ef83df95eb4f824be53b25e5531.enc
1ab136a4489016c3004e3f04c438f268 ./pytorch_model-00002-of-00002.bin.45adb5c7b91f81b2c03c913f2e52487a0e22663e088063b699c6a903101b7968.enc
0d6db7f247a51589f3dd6d08dbfe64ce ./pytorch_model.bin.index.json.4f08b269e18619675bc3fd62f6efb3a8d59f9d54fa50f5625d0bba7adabaf90e.enc
34696bfce7b27548cfc2410e2b55762e ./special_tokens_map.json.96bdbb8504d9967606e5f661ccc7cbbac44a3661af863a7a58614670a0ccab33.enc
6014cf2235521f974c8d9fb69b6cf07e ./tokenizer_config.json.7078cc180b3d35e7ccd06b49ede4a7fef85f2572bda40c1fe2fc8f9ab25418d3.enc
56724a79091f3d1877cca65c6412d646 ./tokenizer.model.0b716a618c9e7c45648f91d997431eba3b0ff111b17ce7b777280ed771a49f95.enc
```
2. Decrypt the files using the scripts in https://github.com/LianjiaTech/BELLE/tree/main/models
You can use the following command in Bash.
Please replace "/path/to_encrypted" with the path where you stored your encrypted file,
replace "/path/to_original_llama_7B" with the path where you stored your original llama7B file,
and replace "/path/to_finetuned_model" with the path where you want to save your final trained model.
```bash
mkdir /path/to_finetuned_model
for f in "/path/to_encrypted"/*; \
do if [ -f "$f" ]; then \
python3 decrypt.py "$f" "/path/to_original_llama_7B/consolidated.00.pth" "/path/to_finetuned_model/"; \
fi; \
done
```
After executing the aforementioned command, you will obtain the following files.
```
./config.json
./generation_config.json
./pytorch_model-00001-of-00002.bin
./pytorch_model-00002-of-00002.bin
./pytorch_model.bin.index.json
./special_tokens_map.json
./tokenizer_config.json
./tokenizer.model
```
3. Check md5sum
You can verify the integrity of these files by performing an MD5 checksum to ensure their complete recovery.
Here are the MD5 checksums for the relevant files:
```
md5sum ./*
139cb9dc0065bd878b277860c70add74 ./config.json
2917a1cafb895cf57e746cfd7696bfe5 ./generation_config.json
2f6cce3296b6bfeb8beb1629bf07dfe9 ./pytorch_model-00001-of-00002.bin
8fe5b4ad70788b3a6086ef28709a8730 ./pytorch_model-00002-of-00002.bin
e5385004e4876ea6b93d6126e845a82f ./pytorch_model.bin.index.json
15f7a943faa91a794f38dd81a212cb01 ./special_tokens_map.json
08f6f621dba90b2a23c6f9f7af974621 ./tokenizer_config.json
6ffe559392973a92ea28032add2a8494 ./tokenizer.model
```
## Use model
Please note that the input should be formatted as follows in both **training** and **inference**.
``` python
Human: {input} \n\nAssistant:
```
In order to load BELLE-LLAMA-7B-2M-enc with huggingface transformers, please install the main version, as the latest stable version doesn't support LLAMA (as of March 26, 2023).
```bash
pip install git+https://github.com/huggingface/transformers
```
After you decrypt the files, BELLE-LLAMA-7B-2M can be easily loaded with LlamaForCausalLM.
``` python
from transformers import LlamaForCausalLM, AutoTokenizer
import torch
ckpt = '/path/to_finetuned_model/'
device = torch.device('cuda')
model = LlamaForCausalLM.from_pretrained(ckpt, device_map='auto', low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
prompt = "Human: 写一首中文歌曲,赞美大自然 \n\nAssistant: "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(input_ids, max_new_tokens=300, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.5,repetition_penalty=1.2, eos_token_id=2, bos_token_id=1, pad_token_id=0)
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
response = output[len(prompt):]
print(response)
```
## Limitations
A few issues still exist in the model trained on the current base model and data:
1. The model might generate factual errors when asked to follow instructions related to facts.
2. Occasionally generates harmful responses since the model still struggles to identify potential harmful instructions.
3. Needs improvements on reasoning and coding.
Since the model still has its limitations, we require that developers use the open-sourced code, data, model, and any other artifacts generated via this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
## Citation
Please cite our paper and github when using our code, data or model.
```
@misc{ji2023better,
title={Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation},
author={Yunjie Ji and Yan Gong and Yong Deng and Yiping Peng and Qiang Niu and Baochang Ma and Xiangang Li},
year={2023},
eprint={2304.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{BELLE,
author = {BELLEGroup},
title = {BELLE: Be Everyone's Large Language model Engine},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
``` |
darkmegahot/poca-SoccerTwos | darkmegahot | 2023-10-18T03:12:03Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-10-18T03:11:53Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: darkmegahot/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
guocheng66/a2c-PandaReachDense-v3 | guocheng66 | 2023-10-18T03:11:08Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-18T03:05:19Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Weni/WeniGPT-Mistral-7B-instructBase | Weni | 2023-10-18T02:39:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-17T13:03:28Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- generated_from_trainer
model-index:
- name: WeniGPT-Mistral-7B-instructBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WeniGPT-Mistral-7B-instructBase
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.14.1
|
SaiedAlshahrani/bloom_3B_8bit_qlora_flores | SaiedAlshahrani | 2023-10-18T02:29:53Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
]
| null | 2023-10-18T01:28:26Z | ---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_8bit_qlora_flores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_8bit_qlora_flores
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.4.0
- Tokenizers 0.14.1
|
mangoxb/tangled3 | mangoxb | 2023-10-18T02:23:28Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-10-18T02:18:17Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of qzx rapunzel or vmn flynn
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - mangoxb/tangled3
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of qzx rapunzel or vmn flynn" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
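A minimal inference sketch (assumed usage, presuming the LoRA weights were saved in the standard diffusers layout produced by the DreamBooth LoRA training script):

```python
# Sketch only; assumes the standard diffusers LoRA weight file in this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mangoxb/tangled3")  # loads UNet (and text encoder) LoRA layers

image = pipe("a photo of qzx rapunzel", num_inference_steps=30).images[0]
image.save("rapunzel.png")
```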
|
gokuls/hBERTv1_new_pretrain_48_ver2_qnli | gokuls | 2023-10-18T02:08:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T23:35:26Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_ver2_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5053999633900788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_ver2_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6982 | 1.0 | 1637 | 0.6940 | 0.5054 |
| 0.6941 | 2.0 | 3274 | 0.6932 | 0.4946 |
| 0.6938 | 3.0 | 4911 | 0.6933 | 0.4946 |
| 0.6936 | 4.0 | 6548 | 0.6931 | 0.5054 |
| 0.6934 | 5.0 | 8185 | 0.6936 | 0.4946 |
| 0.6934 | 6.0 | 9822 | 0.6936 | 0.4946 |
| 0.6934 | 7.0 | 11459 | 0.6931 | 0.5054 |
| 0.6932 | 8.0 | 13096 | 0.6931 | 0.4946 |
| 0.6932 | 9.0 | 14733 | 0.6935 | 0.5054 |
| 0.6932 | 10.0 | 16370 | 0.6932 | 0.4946 |
| 0.6932 | 11.0 | 18007 | 0.6931 | 0.5054 |
| 0.6932 | 12.0 | 19644 | 0.6932 | 0.4946 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
codys12/MergeLlama-7b | codys12 | 2023-10-18T02:04:35Z | 13 | 2 | peft | [
"peft",
"pytorch",
"llama",
"text-generation",
"dataset:codys12/MergeLlama",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| text-generation | 2023-10-11T20:49:26Z | ---
library_name: peft
base_model: codellama/CodeLlama-7b-hf
license: llama2
datasets:
- codys12/MergeLlama
pipeline_tag: text-generation
---
# Model Card for Model ID
Automated merge conflict resolution
## Model Details
Peft finetune of CodeLlama-7b
### Model Description
- **Developed by:** DreamcatcherAI
- **License:** llama2
- **Finetuned from model [optional]:** CodeLlama-7b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** codys12/MergeLlama-7b
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
Input should be formatted as
```
<<<<<<<
Current change
=======
Incoming change
>>>>>>>
```
MergeLlama will provide the resolution.
You can chain multiple conflicts/resolutions for improved context.
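A minimal loading sketch (an assumed PEFT workflow, not an official example from the authors):

```python
# Sketch only: load the MergeLlama adapter on top of CodeLlama-7b with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base, "codys12/MergeLlama-7b")

# Format the conflict exactly as shown above, then generate the resolution.
conflict = "<<<<<<<\nCurrent change\n=======\nIncoming change\n>>>>>>>\n"
inputs = tokenizer(conflict, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```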
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0 |
foreverip/poca-SoccerTwos | foreverip | 2023-10-18T01:40:21Z | 65 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-10-18T01:25:53Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: foreverip/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Tatvajsh/Lllama_AHS_V_7.1 | Tatvajsh | 2023-10-18T01:30:21Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:finetune:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-17T22:10:06Z | ---
license: apache-2.0
base_model: openlm-research/open_llama_3b_v2
tags:
- generated_from_trainer
model-index:
- name: Lllama_AHS_V_7.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Lllama_AHS_V_7.1
This model is a fine-tuned version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-09
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
kaoriLeo/sd-class-butterflies-32 | kaoriLeo | 2023-10-18T01:27:38Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-10-18T01:26:01Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('kaoriLeo/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
yesj1234/mbart-mmt_mid2_ko-en | yesj1234 | 2023-10-18T00:59:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"ko",
"en",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-18T00:52:03Z | ---
language:
- ko
- en
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: ko-en_mbartLarge_mid2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ko-en_mbartLarge_mid2
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3246
- Bleu: 22.9623
- Gen Len: 18.7197
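Below is a minimal Korean→English translation sketch with 🤗 Transformers. It assumes this fine-tune keeps the mBART-50 tokenizer and language codes of the base model; the Korean input sentence is only a placeholder.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "yesj1234/mbart-mmt_mid2_ko-en"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

# Source language is Korean; force English as the target via its language code.
tokenizer.src_lang = "ko_KR"
inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"], max_length=64
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```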
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.5377 | 0.23 | 2000 | 1.6122 | 17.2009 | 18.7106 |
| 1.3891 | 0.46 | 4000 | 1.5059 | 19.3345 | 18.7688 |
| 1.2812 | 0.7 | 6000 | 1.4348 | 20.6032 | 18.9022 |
| 1.2374 | 0.93 | 8000 | 1.4035 | 21.2391 | 18.8434 |
| 1.1734 | 1.16 | 10000 | 1.4039 | 21.304 | 18.9964 |
| 1.1531 | 1.39 | 12000 | 1.3694 | 21.9087 | 18.8573 |
| 1.1158 | 1.62 | 14000 | 1.3574 | 22.004 | 18.5485 |
| 1.0941 | 1.86 | 16000 | 1.3457 | 21.9785 | 18.7119 |
| 0.9809 | 2.09 | 18000 | 1.3495 | 22.7983 | 18.8011 |
| 0.9834 | 2.32 | 20000 | 1.3429 | 22.5654 | 18.9416 |
| 0.9981 | 2.55 | 22000 | 1.3246 | 22.9493 | 18.7364 |
| 1.0074 | 2.78 | 24000 | 1.3539 | 22.3874 | 18.4428 |
| 0.9752 | 3.02 | 26000 | 1.3587 | 22.1907 | 18.8139 |
| 0.8858 | 3.25 | 28000 | 1.3457 | 22.82 | 18.8021 |
| 0.8895 | 3.48 | 30000 | 1.3603 | 22.1575 | 18.5638 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
teslanando/ChatterBotQA | teslanando | 2023-10-18T00:57:16Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T00:56:49Z | ---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: ChatterBotQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ChatterBotQA
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6871
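A minimal extractive QA sketch is shown below; it assumes the checkpoint loads with the standard `question-answering` pipeline, and the question and context are made-up placeholders.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="teslanando/ChatterBotQA")
result = qa(
    question="Who wrote the report?",
    context="The quarterly report was written by the analytics team in March.",
)
print(result["answer"], result["score"])
```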
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5899556108621122e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1445 | 1.0 | 750 | 1.9222 |
| 1.6222 | 2.0 | 1500 | 1.6359 |
| 1.1724 | 3.0 | 2250 | 1.6205 |
| 0.9271 | 4.0 | 3000 | 1.6871 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
chargoddard/rpguild-chatml-13b | chargoddard | 2023-10-18T00:41:37Z | 1,416 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"roleplay",
"en",
"dataset:chargoddard/rpguild",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-17T23:57:26Z | ---
datasets:
- chargoddard/rpguild
language:
- en
tags:
- llama
- roleplay
license: cc-by-nc-4.0
---
It's a roleplay model! Trained on ~50k examples of human-written roleplay sessions. Not a hint of gpt-flavor writing. Very creative and very able to roll with the punches.
It does tend to speak for you more than it should, and it has a very, very strong bias towards speech in quotes.
It uses the ChatML prompt format, with optional length suffixes:
```
<|im_start|>system
Enter roleplay mode. You are Steve.
Steve is a nasty little man and solves all his problems by punching people in the face.<|im_end|>
<|im_start|>Gundalf, Gandalf's cousin with a gun
Gundalf runs frantically up to Steve and begins yelling. "We have a big problem, Steve!"<|im_end|>
<|im_start|>Steve (Length: short)
"What is it now Gundalf? Do you want me to punch you in the face?" Steve asked, looking down at him. He was pretty sure that this guy wasn't worth the trouble of punching him in the face, but he didn't like to leave loose ends hanging either.
"There's been an incident," Gundalf said quickly, "A dwarf has stolen some gold from Smaug."
"Well I don't care about any of those things," Steve responded, turning away. <|im_end|>
```
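A minimal generation sketch with 🤗 Transformers follows; it assumes standard `AutoModelForCausalLM` loading, the prompt just follows the ChatML layout above, and the sampling settings are illustrative rather than recommended.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chargoddard/rpguild-chatml-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs accelerate installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = (
    "<|im_start|>system\n"
    "Enter roleplay mode. You are Steve.<|im_end|>\n"
    "<|im_start|>Gundalf, Gandalf's cousin with a gun\n"
    "\"We have a big problem, Steve!\"<|im_end|>\n"
    "<|im_start|>Steve (Length: short)\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Without a custom stopping criterion, generation may run past the next <|im_end|>.
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```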
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
gokuls/hBERTv2_new_pretrain_48_ver2_mrpc | gokuls | 2023-10-18T00:41:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T00:26:54Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_48_ver2_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.696078431372549
- name: F1
type: f1
value: 0.7832167832167833
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_ver2_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5864
- Accuracy: 0.6961
- F1: 0.7832
- Combined Score: 0.7396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.664 | 1.0 | 58 | 0.6194 | 0.6716 | 0.7481 | 0.7098 |
| 0.6055 | 2.0 | 116 | 0.5864 | 0.6961 | 0.7832 | 0.7396 |
| 0.5319 | 3.0 | 174 | 0.6058 | 0.6838 | 0.7772 | 0.7305 |
| 0.4447 | 4.0 | 232 | 0.7045 | 0.6667 | 0.7679 | 0.7173 |
| 0.3601 | 5.0 | 290 | 0.7750 | 0.6642 | 0.7609 | 0.7126 |
| 0.2754 | 6.0 | 348 | 1.0176 | 0.6789 | 0.7813 | 0.7301 |
| 0.1895 | 7.0 | 406 | 1.4308 | 0.6299 | 0.7229 | 0.6764 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
agoel3705/rl_course_vizdoom_health_gathering_supreme | agoel3705 | 2023-10-18T00:37:11Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-18T00:37:03Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.47 +/- 4.73
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r agoel3705/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
gokuls/hBERTv2_new_pretrain_48_ver2_cola | gokuls | 2023-10-18T00:26:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T00:07:33Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_ver2_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_ver2_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6316 | 1.0 | 134 | 0.6287 | 0.0 | 0.6913 |
| 0.6171 | 2.0 | 268 | 0.6182 | 0.0 | 0.6913 |
| 0.6141 | 3.0 | 402 | 0.6182 | 0.0 | 0.6913 |
| 0.613 | 4.0 | 536 | 0.6184 | 0.0 | 0.6913 |
| 0.6112 | 5.0 | 670 | 0.6185 | 0.0 | 0.6913 |
| 0.6127 | 6.0 | 804 | 0.6248 | 0.0 | 0.6913 |
| 0.6109 | 7.0 | 938 | 0.6182 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
hung200504/bert-base-cased | hung200504 | 2023-10-18T00:20:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T00:20:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Taeyeun72/whisper-small | Taeyeun72 | 2023-10-18T00:10:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:arrow",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-10-06T04:34:04Z | ---
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- arrow
metrics:
- wer
model-index:
- name: whisper-kor3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: whisper-kor3
type: arrow
config: default
split: train
args: 'config: ko, split: valid'
metrics:
- name: Wer
type: wer
value: 24.690290982425815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kor3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the whisper-kor3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4157
- Wer: 24.6903
- Cer: 11.3851
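A minimal transcription sketch with the 🤗 Transformers ASR pipeline is shown below; it assumes the repo ships the Whisper processor files and a reasonably recent Transformers version, and the audio path is a placeholder.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Taeyeun72/whisper-small",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(asr("korean_sample.wav")["text"])  # placeholder path to a 16 kHz mono audio file
```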
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.2195 | 0.05 | 100 | 1.0198 | 34.4857 | 16.2544 |
| 0.7295 | 0.09 | 200 | 0.7220 | 32.6995 | 14.9684 |
| 0.5236 | 0.14 | 300 | 0.5703 | 31.4463 | 14.2549 |
| 0.4976 | 0.18 | 400 | 0.5461 | 31.8640 | 14.6274 |
| 0.479 | 0.23 | 500 | 0.5296 | 30.4091 | 14.0902 |
| 0.4544 | 0.28 | 600 | 0.5219 | 31.7920 | 16.3916 |
| 0.4672 | 0.32 | 700 | 0.5100 | 30.4955 | 13.9138 |
| 0.4305 | 0.37 | 800 | 0.5043 | 30.1354 | 14.5960 |
| 0.4561 | 0.42 | 900 | 0.4941 | 28.8101 | 13.2513 |
| 0.398 | 0.46 | 1000 | 0.4846 | 31.3166 | 14.2980 |
| 0.4338 | 0.51 | 1100 | 0.4780 | 28.0755 | 12.8945 |
| 0.4121 | 0.55 | 1200 | 0.4728 | 27.4128 | 12.5417 |
| 0.4217 | 0.6 | 1300 | 0.4693 | 28.2772 | 14.4392 |
| 0.3881 | 0.65 | 1400 | 0.4639 | 27.6577 | 13.0082 |
| 0.4035 | 0.69 | 1500 | 0.4593 | 26.9231 | 12.4436 |
| 0.4146 | 0.74 | 1600 | 0.4555 | 28.4212 | 13.7609 |
| 0.3837 | 0.78 | 1700 | 0.4511 | 28.8822 | 13.7845 |
| 0.3969 | 0.83 | 1800 | 0.4485 | 29.2135 | 14.2235 |
| 0.4368 | 0.88 | 1900 | 0.4414 | 26.5918 | 12.1457 |
| 0.3679 | 0.92 | 2000 | 0.4376 | 26.4477 | 12.1770 |
| 0.4496 | 0.97 | 2100 | 0.4335 | 30.1354 | 14.9018 |
| 0.3049 | 1.02 | 2200 | 0.4314 | 26.1164 | 12.9180 |
| 0.2213 | 1.06 | 2300 | 0.4325 | 25.9147 | 11.8046 |
| 0.2732 | 1.11 | 2400 | 0.4303 | 26.0012 | 11.8987 |
| 0.2568 | 1.15 | 2500 | 0.4293 | 25.9291 | 11.7576 |
| 0.2456 | 1.2 | 2600 | 0.4289 | 25.6986 | 11.7066 |
| 0.2702 | 1.25 | 2700 | 0.4262 | 25.8283 | 11.8203 |
| 0.2744 | 1.29 | 2800 | 0.4235 | 25.8139 | 11.8124 |
| 0.2742 | 1.34 | 2900 | 0.4254 | 25.6266 | 11.6360 |
| 0.2798 | 1.39 | 3000 | 0.4238 | 25.5546 | 11.6399 |
| 0.2593 | 1.43 | 3100 | 0.4219 | 26.1020 | 12.4632 |
| 0.2619 | 1.48 | 3200 | 0.4208 | 25.3241 | 11.4714 |
| 0.2633 | 1.52 | 3300 | 0.4210 | 26.6350 | 12.9964 |
| 0.2603 | 1.57 | 3400 | 0.4189 | 25.2809 | 11.4243 |
| 0.2992 | 1.62 | 3500 | 0.4189 | 25.2377 | 11.3969 |
| 0.2453 | 1.66 | 3600 | 0.4176 | 25.2377 | 11.5145 |
| 0.2475 | 1.71 | 3700 | 0.4172 | 24.8487 | 11.3969 |
| 0.2545 | 1.75 | 3800 | 0.4164 | 25.0216 | 11.4596 |
| 0.272 | 1.8 | 3900 | 0.4160 | 24.6471 | 11.2714 |
| 0.2339 | 1.85 | 4000 | 0.4157 | 24.6903 | 11.3851 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_ver2_sst2 | gokuls | 2023-10-18T00:07:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T22:00:15Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_ver2_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.805045871559633
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_ver2_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5059
- Accuracy: 0.8050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.375 | 1.0 | 1053 | 0.5059 | 0.8050 |
| 0.2445 | 2.0 | 2106 | 0.5165 | 0.8028 |
| 0.224 | 3.0 | 3159 | 0.5299 | 0.8119 |
| 0.2046 | 4.0 | 4212 | 0.5749 | 0.8073 |
| 0.202 | 5.0 | 5265 | 0.6168 | 0.8050 |
| 0.2027 | 6.0 | 6318 | 0.5630 | 0.8005 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
hung200504/ditilsBert | hung200504 | 2023-10-18T00:05:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T00:04:57Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ditilsBert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ditilsBert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
alessiodm/ppo-LunarLander-v2 | alessiodm | 2023-10-18T00:05:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-17T23:07:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.32 +/- 14.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's files for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename below is assumed.
checkpoint = load_from_hub("alessiodm/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dtorres-zAgile/opt-zc-misti-ft | dtorres-zAgile | 2023-10-17T23:57:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-17T05:18:33Z | ---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
model-index:
- name: opt-zc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-zc
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
wofmanaf/sd-knowledge-model-lora-sdxl-ft-encoder | wofmanaf | 2023-10-17T23:42:04Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-10-17T13:23:46Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: datasets/knowledge_captions
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - wofmanaf/sd-knowledge-model-lora-sdxl-ft-encoder
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the datasets/knowledge_captions dataset. Some example images are shown below.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
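A minimal inference sketch with 🧨 Diffusers follows; the prompt is a placeholder, and the fp16/CUDA settings are illustrative assumptions.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Same fp16-safe VAE that was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("wofmanaf/sd-knowledge-model-lora-sdxl-ft-encoder")

image = pipe("a knowledge-graph style diagram").images[0]  # placeholder prompt
image.save("sample.png")
```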
|
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_cola | gokuls | 2023-10-17T23:41:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T23:27:44Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_ver2_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6346 | 1.0 | 134 | 0.6659 | 0.0 | 0.6913 |
| 0.6202 | 2.0 | 268 | 0.6223 | 0.0 | 0.6913 |
| 0.616 | 3.0 | 402 | 0.6202 | 0.0 | 0.6913 |
| 0.6128 | 4.0 | 536 | 0.6181 | 0.0 | 0.6913 |
| 0.6104 | 5.0 | 670 | 0.6182 | 0.0 | 0.6913 |
| 0.6127 | 6.0 | 804 | 0.6263 | 0.0 | 0.6913 |
| 0.61 | 7.0 | 938 | 0.6182 | 0.0 | 0.6913 |
| 0.6098 | 8.0 | 1072 | 0.6181 | 0.0 | 0.6913 |
| 0.611 | 9.0 | 1206 | 0.6205 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
asas-ai/bloom_560M_4bit_qlora_flores | asas-ai | 2023-10-17T23:38:03Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:asas-ai/bloom_560M_8bit",
"base_model:finetune:asas-ai/bloom_560M_8bit",
"region:us"
]
| null | 2023-10-17T23:37:42Z | ---
base_model: asas-ai/bloom_560M_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_560M_4bit_qlora_flores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_560M_4bit_qlora_flores
This model is a fine-tuned version of [asas-ai/bloom_560M_8bit](https://huggingface.co/asas-ai/bloom_560M_8bit) on an unknown dataset.
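A minimal loading sketch with PEFT, assuming this repository stores QLoRA adapter weights on top of the 8-bit base model named above (loading that base may additionally require `bitsandbytes`); the prompt is a placeholder.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "asas-ai/bloom_560M_8bit"
adapter_id = "asas-ai/bloom_560M_4bit_qlora_flores"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Translate to Arabic: Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```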
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.4.0
- Tokenizers 0.14.1
|
SaiedAlshahrani/bloom_560M_4bit_qlora_flores | SaiedAlshahrani | 2023-10-17T23:37:44Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_560M_8bit",
"base_model:finetune:asas-ai/bloom_560M_8bit",
"region:us"
]
| null | 2023-10-17T23:07:59Z | ---
base_model: asas-ai/bloom_560M_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_560M_4bit_qlora_flores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_560M_4bit_qlora_flores
This model is a fine-tuned version of [asas-ai/bloom_560M_8bit](https://huggingface.co/asas-ai/bloom_560M_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.4.0
- Tokenizers 0.14.1
|
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_sst2 | gokuls | 2023-10-17T23:27:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T22:40:02Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8119266055045872
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_ver2_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4256
- Accuracy: 0.8119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3386 | 1.0 | 1053 | 0.4256 | 0.8119 |
| 0.2249 | 2.0 | 2106 | 0.6293 | 0.8085 |
| 0.1865 | 3.0 | 3159 | 0.4738 | 0.7982 |
| 0.1666 | 4.0 | 4212 | 0.5173 | 0.8142 |
| 0.1429 | 5.0 | 5265 | 0.6124 | 0.7982 |
| 0.119 | 6.0 | 6318 | 0.6314 | 0.8062 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gokuls/hBERTv1_new_pretrain_48_ver2_cola | gokuls | 2023-10-17T23:27:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T23:16:25Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_ver2_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_ver2_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6454 | 1.0 | 134 | 0.6330 | 0.0 | 0.6913 |
| 0.6173 | 2.0 | 268 | 0.6188 | 0.0 | 0.6913 |
| 0.6141 | 3.0 | 402 | 0.6181 | 0.0 | 0.6913 |
| 0.6147 | 4.0 | 536 | 0.6181 | 0.0 | 0.6913 |
| 0.6134 | 5.0 | 670 | 0.6191 | 0.0 | 0.6913 |
| 0.6112 | 6.0 | 804 | 0.6335 | 0.0 | 0.6913 |
| 0.6114 | 7.0 | 938 | 0.6183 | 0.0 | 0.6913 |
| 0.6095 | 8.0 | 1072 | 0.6181 | 0.0 | 0.6913 |
| 0.6113 | 9.0 | 1206 | 0.6206 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:23:43Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T19:15:06Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
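A minimal tagging sketch with Flair is shown below; it assumes the checkpoint can be loaded directly from the Hub via `SequenceTagger.load`, and the example sentence is shortened from the widget text above.
```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral , qui paraîtra à Genève .")
tagger.predict(sentence)
print(sentence)                       # tagged sentence
for label in sentence.get_labels():   # one entry per predicted entity
    print(label)
```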
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:41Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:35:44Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:23:38Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T19:01:00Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:23:37Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T18:11:22Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:36Z | 3 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:21:42Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
hmbert/flair-hipe-2022-hipe2020-fr | hmbert | 2023-10-17T23:23:35Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T16:31:58Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
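# Usage
A short usage sketch with the Flair library; printing `to_tagged_string()` is just one way to inspect predictions, and iterating over `sentence.get_spans("ner")` works as well.
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the tagger released under the hmbert organization
tagger = SequenceTagger.load("hmbert/flair-hipe-2022-hipe2020-fr")

# historic French newspaper text, taken from the widget example above
sentence = Sentence(
    "Son but est de représenter l ' élément national du radicalisme genevois ."
)

# predict entities and print the sentence with inline NER annotations
tagger.predict(sentence)
print(sentence.to_tagged_string())
```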
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:23:33Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T18:46:58Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:23:32Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:57:20Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:31Z | 3 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:07:37Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:23:29Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T16:17:52Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:23:01Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:44:48Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
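# Usage
A minimal usage sketch with the Flair library; the `ner` label type is assumed to be the one the tagger was trained with.
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned German HIPE-2020 tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# historic German newspaper text, taken from the widget example above
sentence = Sentence(
    "Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ ."
)

# predict entities and print every detected span with its label and confidence
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```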
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:00Z | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:12:52Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:58Z | 9 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:09:09Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:22:57Z | 3 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T14:07:42Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:22:55Z | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:03:49Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid 19th to the mid 20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
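The "Avg." column can be reproduced from the per-run scores. A quick sanity check, assuming the reported value is the mean ± (population) standard deviation of the five development F1-scores, scaled to percent:

```python
import numpy as np

# Development F1-scores of the five bs4-e10-lr3e-05 runs from the table above
scores = np.array([0.7876, 0.7978, 0.7803, 0.7859, 0.7907])

mean = scores.mean() * 100  # 78.85
std = scores.std() * 100    # 0.58 (population standard deviation, ddof=0)
print(f"{mean:.2f} ± {std:.2f}")  # -> 78.85 ± 0.58
```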
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:22:51Z | 9 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:58:40Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
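For a rough idea of what such a run looks like, the sketch below shows how a comparable fine-tuning could be set up with Flair. It is an illustration only, not the exact hmBench configuration: the corpus loader arguments, the hidden size and the output path are assumptions; the backbone, batch size, learning rate and epoch count follow this card's name.

```python
from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# German HIPE-2020 corpus (loader arguments are assumed, not taken from hmBench)
corpus = NER_HIPE_2022(dataset_name="hipe2020", language="de")
label_dict = corpus.make_label_dictionary(label_type="ner")

# hmBERT backbone, fine-tuned end-to-end, last layer only, first-subtoken pooling
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-base-historic-multilingual-cased",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

# Plain linear tag head without CRF (matching "crfFalse" in the model name)
tagger = SequenceTagger(
    hidden_size=256,  # assumed value
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/hmbench-hipe2020-de",  # assumed output path
    learning_rate=5e-05,  # this card's configuration: batch size 8, lr 5e-05, 10 epochs
    mini_batch_size=8,
    max_epochs=10,
)
```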
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:22:47Z | 11 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:22:43Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:46Z | 11 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T11:50:36Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:22:41Z | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:15:54Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:40Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T11:43:50Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:22:23Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-17T00:01:49Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
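The tagger can be loaded directly with the Flair library. Below is a minimal usage sketch (the example sentence is the widget text from above; the `"ner"` label type is an assumption based on the standard Flair sequence-tagger setup):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned sequence tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

# Tag a historical English sentence (the widget example from above)
sentence = Sentence(
    "On Wednesday , a public dinner was given by the Conservative Burgesses of "
    "Leads , to the Conservative members of the Leeds Town Council , in the "
    "Music Hall , Albion-street , which was very numerously attended ."
)
tagger.predict(sentence)

# Print the detected named entities
for entity in sentence.get_spans("ner"):
    print(entity)
```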
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.8024][1] | [0.7936][2] | [0.8083][3] | [0.8042][4] | [0.8122][5] | 80.41 ± 0.63 |
| bs4-e10-lr3e-05 | [0.791][6] | [0.8143][7] | [0.8017][8] | [0.8065][9] | [0.8065][10] | 80.4 ± 0.77 |
| bs8-e10-lr5e-05 | [0.7974][11] | [0.7983][12] | [0.8092][13] | [0.8094][14] | [0.7828][15] | 79.94 ± 0.98 |
| bs4-e10-lr5e-05 | [0.8058][16] | [0.7966][17] | [0.8033][18] | [0.7889][19] | [0.786][20] | 79.61 ± 0.77 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:22:21Z | 5 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-16T22:37:55Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.8024][1] | [0.7936][2] | [0.8083][3] | [0.8042][4] | [0.8122][5] | 80.41 ± 0.63 |
| bs4-e10-lr3e-05 | [0.791][6] | [0.8143][7] | [0.8017][8] | [0.8065][9] | [0.8065][10] | 80.4 ± 0.77 |
| bs8-e10-lr5e-05 | [0.7974][11] | [0.7983][12] | [0.8092][13] | [0.8094][14] | [0.7828][15] | 79.94 ± 0.98 |
| bs4-e10-lr5e-05 | [0.8058][16] | [0.7966][17] | [0.8033][18] | [0.7889][19] | [0.786][20] | 79.61 ± 0.77 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:19Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-16T21:14:05Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.8024][1] | [0.7936][2] | [0.8083][3] | [0.8042][4] | [0.8122][5] | 80.41 ± 0.63 |
| bs4-e10-lr3e-05 | [0.791][6] | [0.8143][7] | [0.8017][8] | [0.8065][9] | [0.8065][10] | 80.4 ± 0.77 |
| bs8-e10-lr5e-05 | [0.7974][11] | [0.7983][12] | [0.8092][13] | [0.8094][14] | [0.7828][15] | 79.94 ± 0.98 |
| bs4-e10-lr5e-05 | [0.8058][16] | [0.7966][17] | [0.8033][18] | [0.7889][19] | [0.786][20] | 79.61 ± 0.77 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|