modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02 | 204c99011b72a484b6c763dfac69df6b2bbc7ef7 | 2022-02-27T17:40:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02 | 5 | null | transformers | 16,900 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
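For readers who want to reproduce this configuration, the hyperparameter list above maps onto the `transformers` `TrainingArguments` below. This is a minimal sketch assuming the standard `Trainer` API; the output directory name is hypothetical and dataset/model loading is omitted.
```python
from transformers import TrainingArguments

# Hedged sketch: these arguments mirror the hyperparameters listed above.
# The output directory is illustrative, not the one used for this model.
training_args = TrainingArguments(
    output_dir="finetuned_sentence_itr2",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```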
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55 | 9fd3fdf08e332c8fae7a2f69331ca3bc11d43061 | 2022-02-27T17:54:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55 | 5 | null | transformers | 16,901 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05 | f478430483ba43b56c06e875ae7956b32a5271ae | 2022-02-27T18:01:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05 | 5 | null | transformers | 16,902 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42 | 639cf081932373c3bd34d89f43502dead4922187 | 2022-02-27T18:11:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42 | 5 | null | transformers | 16,903 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- Accuracy: 0.8231
- F1: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 |
| No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 |
| 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 |
| 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 |
| 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17 | 71ea8de2a8d395696fb16f67baca4dd96efb88d7 | 2022-02-27T18:16:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17 | 5 | null | transformers | 16,904 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.8289
- F1: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 |
| No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 |
| 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 |
| 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 |
| 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42 | c2bdac5868f90c4f7ff416e9f3a8273c754153b2 | 2022-02-27T18:42:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42 | 5 | null | transformers | 16,905 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0914
- Accuracy: 0.9746
- F1: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0501 | 0.9828 | 0.9913 |
| No log | 2.0 | 208 | 0.0435 | 0.9828 | 0.9913 |
| No log | 3.0 | 312 | 0.0414 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0424 | 0.9799 | 0.9898 |
| 0.0547 | 5.0 | 520 | 0.0482 | 0.9828 | 0.9913 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
aryanbhosale/DialoGPT-medium-harrypotter | 530611a9dab90202e60c132da89b4925f9a2e941 | 2022-02-28T05:49:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aryanbhosale | null | aryanbhosale/DialoGPT-medium-harrypotter | 5 | null | transformers | 16,906 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
ppang/model5 | fc47892ac93302daa9a592f91389ebf8ee818af6 | 2022-02-28T10:54:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ppang | null | ppang/model5 | 5 | null | transformers | 16,907 | Entry not found |
frahman/distilbert-base-uncased-finetuned-clinc | 190099e400fafebb150505779e3f89317dbe0676 | 2022-02-28T15:10:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | frahman | null | frahman/distilbert-base-uncased-finetuned-clinc | 5 | null | transformers | 16,908 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9187096774193548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7703
- Accuracy: 0.9187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 |
| 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 |
| 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 |
| 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
frahman/distilbert-base-uncased-distilled-clinc | b62f3f4de0c22facf5d041a14b0d395ab2240164 | 2022-02-28T15:54:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | frahman | null | frahman/distilbert-base-uncased-distilled-clinc | 5 | null | transformers | 16,909 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9406451612903226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1002
- Accuracy: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
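As a starting point, the checkpoint can be loaded with the text-classification pipeline for intent classification. The snippet below is a hedged sketch; the example utterance is illustrative and not taken from clinc_oos.
```python
from transformers import pipeline

# Hedged sketch: load the distilled intent classifier and run one utterance.
# The example sentence is invented; label names come from the saved config.
classifier = pipeline(
    "text-classification",
    model="frahman/distilbert-base-uncased-distilled-clinc",
)
print(classifier("Please transfer 100 dollars to my savings account"))
```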
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9039 | 1.0 | 318 | 0.5777 | 0.7335 |
| 0.4486 | 2.0 | 636 | 0.2860 | 0.8768 |
| 0.2528 | 3.0 | 954 | 0.1792 | 0.9210 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9274 |
| 0.1417 | 5.0 | 1590 | 0.1209 | 0.9329 |
| 0.1245 | 6.0 | 1908 | 0.1110 | 0.94 |
| 0.1135 | 7.0 | 2226 | 0.1061 | 0.9390 |
| 0.1074 | 8.0 | 2544 | 0.1026 | 0.94 |
| 0.1032 | 9.0 | 2862 | 0.1006 | 0.9410 |
| 0.1017 | 10.0 | 3180 | 0.1002 | 0.9406 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT | acca2f503cf8fccc6562a7c7a7e7380abc320832 | 2022-03-12T11:50:46.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT | 5 | null | transformers | 16,910 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1720
- Precision: 0.8253
- Recall: 0.8147
- F1: 0.8200
- Accuracy: 0.9660
## Model description
This model performs Named Entity Recognition for six entity types: Sequence, Cell, Protein, Gene, Taxon, and Chemical, from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) corpus in English.
Entity tags have been normalized: the original three-letter codes were replaced with full names, e.g. B-Protein, I-Chemical.
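As an illustration, the model can be queried through the token-classification pipeline. This is a hedged sketch; the example sentence is invented for demonstration and `aggregation_strategy` simply groups word pieces into whole entities.
```python
from transformers import pipeline

# Hedged sketch: biomedical NER with the fine-tuned model.
# The example sentence is illustrative only.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT",
    aggregation_strategy="simple",
)
print(ner("The p53 protein regulates the cell cycle in Homo sapiens."))
```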
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1133 | 1.0 | 1360 | 0.1629 | 0.7985 | 0.7782 | 0.7882 | 0.9610 |
| 0.049 | 2.0 | 2720 | 0.1530 | 0.8165 | 0.8084 | 0.8124 | 0.9651 |
| 0.0306 | 3.0 | 4080 | 0.1603 | 0.8198 | 0.8075 | 0.8136 | 0.9650 |
| 0.0158 | 4.0 | 5440 | 0.1720 | 0.8253 | 0.8147 | 0.8200 | 0.9660 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
ali2066/twitter-roberta-base-sentiment_token_itr0_2e-05_all_01_03_2022-04_19_45 | 06d19c7765ef6af7d8603d157accfc48319f35cb | 2022-03-01T03:23:18.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/twitter-roberta-base-sentiment_token_itr0_2e-05_all_01_03_2022-04_19_45 | 5 | null | transformers | 16,911 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter-roberta-base-sentiment_token_itr0_2e-05_all_01_03_2022-04_19_45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment_token_itr0_2e-05_all_01_03_2022-04_19_45
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2858
- Precision: 0.3206
- Recall: 0.4721
- F1: 0.3819
- Accuracy: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3772 | 0.0269 | 0.0326 | 0.0294 | 0.8143 |
| No log | 2.0 | 60 | 0.3052 | 0.2015 | 0.3596 | 0.2583 | 0.8537 |
| No log | 3.0 | 90 | 0.2937 | 0.2737 | 0.4273 | 0.3337 | 0.8722 |
| No log | 4.0 | 120 | 0.2852 | 0.2728 | 0.4348 | 0.3353 | 0.8750 |
| No log | 5.0 | 150 | 0.2676 | 0.2851 | 0.4474 | 0.3483 | 0.8797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55 | 1a65109d7c58991e8a2106d3d8f0e988f43c6876 | 2022-03-01T12:17:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55 | 5 | null | transformers | 16,912 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Accuracy: 0.8286
- F1: 0.8887
- Precision: 0.8628
- Recall: 0.9162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3890 | 0.8110 | 0.8749 | 0.8631 | 0.8871 |
| 0.4535 | 2.0 | 780 | 0.3921 | 0.8439 | 0.8984 | 0.8721 | 0.9264 |
| 0.266 | 3.0 | 1170 | 0.4454 | 0.8415 | 0.8947 | 0.8860 | 0.9034 |
| 0.16 | 4.0 | 1560 | 0.5610 | 0.8427 | 0.8957 | 0.8850 | 0.9067 |
| 0.16 | 5.0 | 1950 | 0.6180 | 0.8488 | 0.9010 | 0.8799 | 0.9231 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
coastalcph/fairlex-cail-minilm | 96c5fdef6fdc4d1148c33ee191d7a52026675ebb | 2022-03-01T13:12:22.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"zh",
"transformers",
"legal",
"fairlex",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | coastalcph | null | coastalcph/fairlex-cail-minilm | 5 | null | transformers | 16,913 | ---
language: zh
pipeline_tag: fill-mask
license: cc-by-nc-sa-4.0
tags:
- legal
- fairlex
widget:
- text: "上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。"
---
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-cail-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-cail-minilm")
```
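The same checkpoint can also be queried through the fill-mask pipeline. The following is a hedged sketch that reuses the widget sentence above; since the model is XLM-R based, the mask token is `<mask>`.
```python
from transformers import pipeline

# Hedged sketch: query the masked-language-model head with the widget example above.
unmasker = pipeline("fill-mask", model="coastalcph/fairlex-cail-minilm")
print(unmasker("上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。"))
```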
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) | |
ali2066/finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32 | 3b4a4675fea6cd912bb3346a707ffbdd299dc363 | 2022-03-01T12:31:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32 | 5 | null | transformers | 16,914 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Accuracy: 0.8138
- F1: 0.8785
- Precision: 0.8489
- Recall: 0.9101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4335 | 0.7732 | 0.8533 | 0.8209 | 0.8883 |
| 0.5141 | 2.0 | 780 | 0.4196 | 0.8037 | 0.8721 | 0.8446 | 0.9015 |
| 0.3368 | 3.0 | 1170 | 0.4519 | 0.8098 | 0.8779 | 0.8386 | 0.9212 |
| 0.2677 | 4.0 | 1560 | 0.4787 | 0.8122 | 0.8785 | 0.8452 | 0.9146 |
| 0.2677 | 5.0 | 1950 | 0.4912 | 0.8146 | 0.8794 | 0.8510 | 0.9097 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter-roberta-base_sentence_itr0_1e-05_all_01_03_2022-13_38_07 | 3e94bce8bb2c53a2f66f401348275884d0c1937d | 2022-03-01T12:47:58.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ali2066 | null | ali2066/twitter-roberta-base_sentence_itr0_1e-05_all_01_03_2022-13_38_07 | 5 | null | transformers | 16,915 | Entry not found |
ali2066/bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15 | 4a63d707050963ae2ea5d27772f7e4f960a75573 | 2022-03-01T13:18:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ali2066 | null | ali2066/bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15 | 5 | null | transformers | 16,916 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7632
- Accuracy: 0.8263
- F1: 0.8871
- Precision: 0.8551
- Recall: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3986 | 0.8305 | 0.8903 | 0.8868 | 0.8938 |
| 0.4561 | 2.0 | 780 | 0.4018 | 0.8439 | 0.9009 | 0.8805 | 0.9223 |
| 0.3111 | 3.0 | 1170 | 0.4306 | 0.8354 | 0.8924 | 0.8974 | 0.8875 |
| 0.1739 | 4.0 | 1560 | 0.5499 | 0.8378 | 0.9002 | 0.8547 | 0.9509 |
| 0.1739 | 5.0 | 1950 | 0.6223 | 0.85 | 0.9052 | 0.8814 | 0.9303 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_57_21 | 5a903832f0c9b3443ea96b727830fd711b7ff248 | 2022-03-01T13:58:54.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_57_21 | 5 | null | transformers | 16,917 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_57_21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_57_21
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5905
- Precision: 0.0024
- Recall: 0.0143
- F1: 0.0041
- Accuracy: 0.6867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6081 | 0.0 | 0.0 | 0.0 | 0.6904 |
| No log | 2.0 | 20 | 0.6014 | 0.0025 | 0.0130 | 0.0042 | 0.6934 |
| No log | 3.0 | 30 | 0.5953 | 0.0 | 0.0 | 0.0 | 0.6930 |
| No log | 4.0 | 40 | 0.5858 | 0.0 | 0.0 | 0.0 | 0.6941 |
| No log | 5.0 | 50 | 0.5815 | 0.0 | 0.0 | 0.0 | 0.6947 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04 | bf746b65a2f8b0ac1930444eca439343862fdd1c | 2022-03-01T14:39:23.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04 | 5 | null | transformers | 16,918 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2876
- Precision: 0.2345
- Recall: 0.4281
- F1: 0.3030
- Accuracy: 0.8728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3907 | 0.0433 | 0.0824 | 0.0568 | 0.7626 |
| No log | 2.0 | 60 | 0.3046 | 0.2302 | 0.4095 | 0.2947 | 0.8598 |
| No log | 3.0 | 90 | 0.2945 | 0.2084 | 0.4095 | 0.2762 | 0.8668 |
| No log | 4.0 | 120 | 0.2687 | 0.2847 | 0.4607 | 0.3519 | 0.8761 |
| No log | 5.0 | 150 | 0.2643 | 0.2779 | 0.4444 | 0.3420 | 0.8788 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/bert_base_uncased_itr0_0.0001_webDiscourse_01_03_2022-16_08_12 | 447e3f28d2c4318688f9bc30b589a9e31073472c | 2022-03-01T15:11:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ali2066 | null | ali2066/bert_base_uncased_itr0_0.0001_webDiscourse_01_03_2022-16_08_12 | 5 | null | transformers | 16,919 | Entry not found |
batterydata/batteryscibert-uncased-squad-v1 | c434942fe8c6b4f73715cffd77ea5af08ae9f734 | 2022-03-03T20:28:37.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"transformers",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | batterydata | null | batterydata/batteryscibert-uncased-squad-v1 | 5 | null | transformers | 16,920 | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BatterySciBERT-uncased for QA
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batteryscibert-uncased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.81,
"f1": 87.66,
```
Evaluated on the battery device dataset.
```
"precision": 66.65,
"recall": 85.29,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
batterydata/batterybert-cased-abstract | 7316b880b09f305e26a8e98f5e86d412b4b9d855 | 2022-03-05T14:54:39.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
] | text-classification | false | batterydata | null | batterydata/batterybert-cased-abstract | 5 | null | transformers | 16,921 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryBERT-cased for Battery Abstract Classification
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.29,
"Test accuracy": 96.85,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
batterydata/batteryscibert-cased-abstract | 3bf4862fa015bb25727d7cb9793064eb18e77141 | 2022-03-05T14:54:32.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
] | text-classification | false | batterydata | null | batterydata/batteryscibert-cased-abstract | 5 | null | transformers | 16,922 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatterySciBERT-cased for Battery Abstract Classification
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batteryscibert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.06,
"Test accuracy": 97.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
batterydata/batteryonlybert-cased-abstract | 35fab45605285d77522d99fc1eab7d07be4d6aa2 | 2022-03-05T14:54:53.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
] | text-classification | false | batterydata | null | batterydata/batteryonlybert-cased-abstract | 5 | null | transformers | 16,923 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryOnlyBERT-cased for Battery Abstract Classification
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryonlybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.33,
"Test accuracy": 97.34,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
batterydata/batteryonlybert-uncased-abstract | ab2a1b254413a35d634944e752344bcae38d28fa | 2022-03-05T14:53:56.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
] | text-classification | false | batterydata | null | batterydata/batteryonlybert-uncased-abstract | 5 | null | transformers | 16,924 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryOnlyBERT-uncased for Battery Abstract Classification
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 13
base_LM_model = "batteryonlybert-uncased"
learning_rate = 3e-5
```
## Performance
```
"Validation accuracy": 97.18,
"Test accuracy": 97.08,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
armageddon/electra-base-squad2-covid-qa-deepset | 64d25cb635299915b3bb6d6f4c0f702a5bf3dcdc | 2022-03-02T06:38:05.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | armageddon | null | armageddon/electra-base-squad2-covid-qa-deepset | 5 | null | transformers | 16,925 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: electra-base-squad2-covid-qa-deepset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-squad2-covid-qa-deepset
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on the covid_qa_deepset dataset.
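A hedged usage sketch with the question-answering pipeline is shown below; the question and context are invented for illustration and are not drawn from covid_qa_deepset.
```python
from transformers import pipeline

# Hedged sketch: extractive QA with the fine-tuned ELECTRA model.
qa = pipeline(
    "question-answering",
    model="armageddon/electra-base-squad2-covid-qa-deepset",
)
result = qa(
    question="How is the virus transmitted?",
    context="The virus is transmitted mainly through respiratory droplets produced when an infected person coughs or sneezes.",
)
print(result)
```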
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Cheatham/xlm-roberta-large-finetuned-r01 | b7c853a26475505eeaf1a2ef6b4b3bb0e7df3c12 | 2022-03-02T10:30:34.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-r01 | 5 | null | transformers | 16,926 | Entry not found |
evs/distilbert-base-uncased-finetuned-emotion | 2b1eef0e539edc8c5559ab2209b2152e9097af33 | 2022-03-02T12:46:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | evs | null | evs/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 16,927 | Entry not found |
Cheatham/xlm-roberta-large-finetuned-d1r01 | 772617dbe405bf288be5bbc9f2881559aa2c72b5 | 2022-03-02T13:37:04.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d1r01 | 5 | null | transformers | 16,928 | Entry not found |
lucasmtz/distilbert-base-uncased-finetuned-ner | b4aa38ba70b824d8b9bf8559617c13038a1f850e | 2022-03-02T15:56:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | lucasmtz | null | lucasmtz/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,929 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9252181597260577
- name: Recall
type: recall
value: 0.9370175634858485
- name: F1
type: f1
value: 0.9310804802134283
- name: Accuracy
type: accuracy
value: 0.9834146186474335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9252
- Recall: 0.9370
- F1: 0.9311
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.244 | 1.0 | 878 | 0.0714 | 0.9104 | 0.9181 | 0.9142 | 0.9797 |
| 0.0568 | 2.0 | 1756 | 0.0605 | 0.9183 | 0.9351 | 0.9266 | 0.9827 |
| 0.0302 | 3.0 | 2634 | 0.0610 | 0.9252 | 0.9370 | 0.9311 | 0.9834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Akash7897/distilbert-base-uncased-finetuned-sst2 | 0f3e476bb26b0ed34c676b9db35ad06d5c1e5323 | 2022-03-03T08:57:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Akash7897 | null | Akash7897/distilbert-base-uncased-finetuned-sst2 | 5 | null | transformers | 16,930 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9036697247706422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3010
- Accuracy: 0.9037
## Model description
More information needed
## Intended uses & limitations
More information needed
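As a starting point, the checkpoint can be used for SST-2 style sentiment classification. The snippet below is a hedged sketch; the sentence is illustrative, and the label names depend on the saved config (they may appear as LABEL_0/LABEL_1 rather than negative/positive).
```python
from transformers import pipeline

# Hedged sketch: binary sentiment classification with the fine-tuned checkpoint.
sentiment = pipeline(
    "text-classification",
    model="Akash7897/distilbert-base-uncased-finetuned-sst2",
)
print(sentiment("The battery life on this laptop is fantastic."))
```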
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1793 | 1.0 | 4210 | 0.3010 | 0.9037 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
batterydata/batteryonlybert-uncased | e675b6d643afd3cd7f3aa2f37e0cd124248e4a38 | 2022-03-05T16:03:58.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:batterypapers",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | batterydata | null | batterydata/batteryonlybert-uncased | 5 | null | transformers | 16,931 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- batterypapers
---
# BatteryOnlyBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it
does not make a difference between english and English.
## Model description
BatteryOnlyBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryOnlyBERT model was pretrained on the full text of battery papers only. The paper corpus contains 1.87B tokens from a total of 400,366 battery research papers published from 2000 to June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 28,996. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
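A minimal sketch of this 80/10/10 corruption scheme (illustrative only: the helper below is a hypothetical re-implementation, not the actual pretraining code):
```python
import random

def corrupt_tokens(token_ids, mask_token_id, vocab_size, mask_prob=0.15):
    """Apply the BERT-style 80/10/10 corruption described above."""
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)      # -100 is ignored by the MLM loss
    for i, token_id in enumerate(token_ids):
        if random.random() < mask_prob:   # 15% of tokens are selected
            labels[i] = token_id          # the model must predict the original
            roll = random.random()
            if roll < 0.8:                # 80%: replace with [MASK]
                corrupted[i] = mask_token_id
            elif roll < 0.9:              # 10%: replace with a random token
                corrupted[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token unchanged
    return corrupted, labels
```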
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,500,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batteryonlybert-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased')
model = BertModel.from_pretrained('batterydata/batteryonlybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-uncased')
model = TFBertModel.from_pretrained('batterydata/batteryonlybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.0614.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
daisyxie21/bert-base-uncased-8-200-0.01 | 7b67fb16b7a22c16c86549b2acd0e424a6591f67 | 2022-03-04T14:21:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | daisyxie21 | null | daisyxie21/bert-base-uncased-8-200-0.01 | 5 | null | transformers | 16,932 | Entry not found |
daisyxie21/bert-base-uncased-8-10-0.01 | c4494f588452be44ba13f5581221312585928b2f | 2022-03-04T16:27:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | daisyxie21 | null | daisyxie21/bert-base-uncased-8-10-0.01 | 5 | null | transformers | 16,933 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-8-10-0.01
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-8-10-0.01
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8324
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 400 | 0.8324 | 0.0 |
| 1.0904 | 2.0 | 800 | 1.3157 | 0.0 |
| 0.9461 | 3.0 | 1200 | 0.4407 | 0.0 |
| 0.9565 | 4.0 | 1600 | 2.1082 | 0.0 |
| 1.024 | 5.0 | 2000 | 0.7220 | 0.0 |
| 1.024 | 6.0 | 2400 | 0.7414 | 0.0 |
| 0.8362 | 7.0 | 2800 | 0.4442 | 0.0 |
| 0.6765 | 8.0 | 3200 | 0.5481 | 0.0 |
| 0.5902 | 9.0 | 3600 | 0.5642 | 0.0 |
| 0.5476 | 10.0 | 4000 | 0.4449 | 0.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
crabz/distil-slovakbert | bf7ccaca15902d4cc2fc93e5991dd9ccd6f9eb73 | 2022-03-06T12:30:11.000Z | [
"pytorch",
"roberta",
"fill-mask",
"sk",
"dataset:c4-sk",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | crabz | null | crabz/distil-slovakbert | 5 | null | transformers | 16,934 | ---
language: sk
license: mit
tags:
- fill-mask
- roberta
datasets:
- c4-sk
inference: false
---
|
DrishtiSharma/distilbert-base-uncased-finetuned-emotion | fe3eb73a0d54f7d79b66549500e4037e8be2754b | 2022-03-05T06:20:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | DrishtiSharma | null | DrishtiSharma/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 16,935 | Entry not found |
jonghyuk/LJP | 94323916202ddcea4b0be236efef057bceaa76c7 | 2022-03-10T05:00:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jonghyuk | null | jonghyuk/LJP | 5 | 1 | transformers | 16,936 | Entry not found |
anjandash/finetuned-bert-java-cmpx-v1 | 8b2aab3dfdf17df37a9724942d5c64410aef156f | 2022-03-07T12:19:40.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"java",
"dataset:giganticode/java-cmpx-v1",
"transformers",
"license:mit"
] | text-classification | false | anjandash | null | anjandash/finetuned-bert-java-cmpx-v1 | 5 | null | transformers | 16,937 | ---
language:
- java
license: mit
datasets:
- giganticode/java-cmpx-v1
--- |
Anthos23/FS-finbert-fine-tuned-f1 | c89eafc9ef0e473b992a631ff579cadb01a686aa | 2022-03-07T16:12:42.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Anthos23 | null | Anthos23/FS-finbert-fine-tuned-f1 | 5 | null | transformers | 16,938 | Entry not found |
SuperAI2-Machima/mt5-small-translation_thai-english | 0cd39ca186940c639791daf1430d73b0483b6637 | 2022-03-08T01:37:11.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SuperAI2-Machima | null | SuperAI2-Machima/mt5-small-translation_thai-english | 5 | null | transformers | 16,939 | Entry not found |
aaraki/distilbert-base-uncased-finetuned-cola | 1c70c1b8645681d3c68d6e0b9240fd2e1b74acfd | 2022-03-09T02:08:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | aaraki | null | aaraki/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,940 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.40967417350821667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5026
- Matthews Correlation: 0.4097
## Model description
More information needed
## Intended uses & limitations
More information needed
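No usage example is given, so here is a minimal acceptability-scoring sketch. The label convention (index 1 = linguistically acceptable) follows the usual GLUE/CoLA setup and is an assumption, not something this card states:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "aaraki/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Assumed GLUE/CoLA convention: index 1 = linguistically acceptable.
print(f"acceptable with probability {probs[0, 1]:.3f}")
```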
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5335 | 1.0 | 535 | 0.5026 | 0.4097 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ctoraman/RoBERTa-TR-medium-wp-28k | f2db5487b399ef828b6d0826c1539625d7f6d2c9 | 2022-04-20T07:01:13.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-wp-28k | 5 | null | transformers | 16,941 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium WordPiece 28k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is WordPiece. Vocabulary size is 28.6k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, AutoModelForSequenceClassification, PreTrainedTokenizerFast

model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
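As a short follow-up, the loaded objects can be used to embed a sentence; the Turkish example text and the mean pooling below are illustrative assumptions, not part of the original card:
```python
import torch

# Encode a (lowercased) Turkish sentence with the tokenizer and model loaded above.
inputs = tokenizer("bu bir örnek cümledir", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into a single sentence vector.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size)
```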
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
ArnavL/twteval-pretrained | 3bfcc098686d2ad3781678f6b1fea6fbffa5093e | 2022-03-10T04:52:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | ArnavL | null | ArnavL/twteval-pretrained | 5 | null | transformers | 16,942 | ---
license: mit
---
# Pretrained Model
BASE MODEL : BERT-BASE-UNCASED
DATASET : [TWTEVAL SENTIMENT](https://huggingface.co/datasets/ArnavL/TWTEval-Pretraining-Processed)
|
amanm27/bert-base-uncased-wiki | b8618da1fb9b25fcb4a28fc99ffe3075848d2089 | 2022-03-10T06:15:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | amanm27 | null | amanm27/bert-base-uncased-wiki | 5 | null | transformers | 16,943 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-wiki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-wiki
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9294 | 1.0 | 2319 | 1.7732 |
| 1.8219 | 2.0 | 4638 | 1.7363 |
| 1.7957 | 3.0 | 6957 | 1.7454 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Yangdf/mt5-base-chinese-qg | 030427d42fd45048eb3b3ecdd76382d911038cf9 | 2022-06-14T06:05:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Yangdf | null | Yangdf/mt5-base-chinese-qg | 5 | null | transformers | 16,944 | Entry not found |
kazandaev/mt5-base-en-ru | 4cbf141a45169f204019a8bb70dc1f0a90e47de9 | 2022-03-21T19:31:50.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kazandaev | null | kazandaev/mt5-base-en-ru | 5 | null | transformers | 16,945 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base-en-ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-en-ru
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7194
- Bleu: 14.3528
- Gen Len: 17.8655
## Model description
More information needed
## Intended uses & limitations
More information needed
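Usage is not documented, so here is a minimal generation sketch. It assumes the checkpoint translates raw English input to Russian without a task prefix; the card does not describe its preprocessing, so this may need adjusting:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kazandaev/mt5-base-en-ru"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumption: raw English text in, Russian text out, no task prefix.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```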
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.5319 | 1.0 | 9641 | 0.8010 | 14.0075 | 17.8566 |
| 0.5903 | 2.0 | 19282 | 0.7652 | 14.268 | 17.8691 |
| 0.6942 | 3.0 | 28923 | 0.7194 | 14.3528 | 17.8655 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Sarahliu186/distilbert-base-uncased-finetuned-cola | 520b7778ca93b55a7d10eb28423cdcb18f320316 | 2022-03-10T20:47:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Sarahliu186 | null | Sarahliu186/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,946 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.548847644400088
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7415
- Matthews Correlation: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5273 | 1.0 | 535 | 0.5063 | 0.4092 |
| 0.3491 | 2.0 | 1070 | 0.4956 | 0.5259 |
| 0.2352 | 3.0 | 1605 | 0.6045 | 0.5301 |
| 0.1737 | 4.0 | 2140 | 0.7415 | 0.5488 |
| 0.1264 | 5.0 | 2675 | 0.8459 | 0.5466 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-bert-large-long-run | e56a48c1e52b868a64a4e86902aa3f753efb7aa2 | 2022-03-12T06:47:06.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bert-large-long-run | 5 | null | transformers | 16,947 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 12.7395
- Wer: 2.0272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.6781 | 1.68 | 1500 | 6.6386 | 1.1672 |
| 6.6836 | 3.36 | 3000 | 6.6587 | 1.9518 |
| 6.6622 | 5.04 | 4500 | 6.5888 | 1.9276 |
| 5.844 | 6.73 | 6000 | 6.7220 | 1.9423 |
| 6.4588 | 8.41 | 7500 | 7.7569 | 1.9964 |
| 6.4097 | 10.09 | 9000 | 9.2515 | 2.0168 |
| 6.2676 | 11.77 | 10500 | 9.8159 | 2.0179 |
| 6.4948 | 13.45 | 12000 | 10.7091 | 2.0223 |
| 6.2728 | 15.13 | 13500 | 11.7747 | 2.0255 |
| 6.319 | 16.82 | 15000 | 12.2084 | 2.0259 |
| 5.8157 | 18.5 | 16500 | 12.7395 | 2.0272 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es | 1fcde623c6418569ed81c5cac7a0d0edad63c1fe | 2022-03-11T18:47:48.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es | 5 | null | transformers | 16,948 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1874
- Precision: 0.8559
- Recall: 0.8425
- F1: 0.8492
- Accuracy: 0.9696
## Model description
More information needed
## Intended uses & limitations
More information needed
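A minimal tagging sketch follows; the Spanish example sentence and the aggregation strategy are illustrative assumptions, and the entity label set (from the CRAFT-based training data) is not listed in this card:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("La proteína p53 regula el ciclo celular en células humanas."))
```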
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.072 | 1.0 | 2719 | 0.1500 | 0.8138 | 0.8224 | 0.8181 | 0.9644 |
| 0.0305 | 2.0 | 5438 | 0.1555 | 0.8417 | 0.8253 | 0.8334 | 0.9674 |
| 0.014 | 3.0 | 8157 | 0.1743 | 0.8429 | 0.8412 | 0.8421 | 0.9685 |
| 0.0076 | 4.0 | 10876 | 0.1874 | 0.8559 | 0.8425 | 0.8492 | 0.9696 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
IsaacBot/t5-small-finetuned-mfaqs-en | 259ceda9588be6810e98edcd29cc4139d5f59166 | 2022-03-11T14:18:42.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:mfaq",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | IsaacBot | null | IsaacBot/t5-small-finetuned-mfaqs-en | 5 | null | transformers | 16,949 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mfaq
model-index:
- name: t5-small-finetuned-mfaqs-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-mfaqs-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the mfaq dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
jfealko/wav2vec2-large-xls-r-300m-irish-custom-data | d05abee9624f7ae4ff5b43c9b08d08e2432e9b24 | 2022-03-11T20:16:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | jfealko | null | jfealko/wav2vec2-large-xls-r-300m-irish-custom-data | 5 | null | transformers | 16,950 | Entry not found |
anton-l/xtreme_s_xlsr_minds14 | 97f45602b9d5267e3ac469f6744dd92f4b7f9783 | 2022-03-14T10:58:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | anton-l | null | anton-l/xtreme_s_xlsr_minds14 | 5 | null | transformers | 16,951 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xtreme_s_xlsr_minds14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2566
- F1: {'f1': 0.9460569664921582, 'accuracy': 0.9468540012217471}
## Model description
More information needed
## Intended uses & limitations
More information needed
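A minimal intent-classification sketch, assuming the standard audio-classification pipeline and a local 16 kHz recording (the file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="anton-l/xtreme_s_xlsr_minds14",
)

# Placeholder path: a mono 16 kHz recording of a banking request.
predictions = classifier("sample_call.wav", top_k=3)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```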
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------:|
| 2.551 | 2.7 | 200 | 2.5921 | {'f1': 0.03454307545755678, 'accuracy': 0.1148442272449603} |
| 1.6934 | 5.41 | 400 | 1.5353 | {'f1': 0.5831241711045994, 'accuracy': 0.6053756872327428} |
| 0.5914 | 8.11 | 600 | 0.7337 | {'f1': 0.7990425247664236, 'accuracy': 0.7947464874770922} |
| 0.3896 | 10.81 | 800 | 0.5076 | {'f1': 0.8738199236080776, 'accuracy': 0.872327428222358} |
| 0.5052 | 13.51 | 1000 | 0.4917 | {'f1': 0.8744760456867134, 'accuracy': 0.8747709224190593} |
| 0.4806 | 16.22 | 1200 | 0.4751 | {'f1': 0.8840798740258787, 'accuracy': 0.8845448992058644} |
| 0.2103 | 18.92 | 1400 | 0.5228 | {'f1': 0.8721632556623751, 'accuracy': 0.8729383017715333} |
| 0.4198 | 21.62 | 1600 | 0.5910 | {'f1': 0.8755207264572983, 'accuracy': 0.8766035430665852} |
| 0.11 | 24.32 | 1800 | 0.4464 | {'f1': 0.896423086249818, 'accuracy': 0.8955406230910201} |
| 0.1233 | 27.03 | 2000 | 0.3760 | {'f1': 0.9012283567348968, 'accuracy': 0.9016493585827734} |
| 0.1827 | 29.73 | 2200 | 0.4178 | {'f1': 0.9042381720184095, 'accuracy': 0.9059254734270006} |
| 0.1235 | 32.43 | 2400 | 0.4152 | {'f1': 0.9063257163259107, 'accuracy': 0.9071472205253512} |
| 0.1873 | 35.14 | 2600 | 0.2903 | {'f1': 0.9369340598806323, 'accuracy': 0.9376908979841173} |
| 0.017 | 37.84 | 2800 | 0.3046 | {'f1': 0.9300781160576355, 'accuracy': 0.9303604153940135} |
| 0.0436 | 40.54 | 3000 | 0.3111 | {'f1': 0.9315034391389341, 'accuracy': 0.9321930360415394} |
| 0.0455 | 43.24 | 3200 | 0.2748 | {'f1': 0.9417365311433034, 'accuracy': 0.9425778863775198} |
| 0.046 | 45.95 | 3400 | 0.2800 | {'f1': 0.9390712658440112, 'accuracy': 0.9395235186316433} |
| 0.0042 | 48.65 | 3600 | 0.2566 | {'f1': 0.9460569664921582, 'accuracy': 0.9468540012217471} |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
anwesham/indicbert_hi_ur | f0d4a1286a8fcfc2af5fb363092c9dbbe3b16401 | 2022-03-13T02:51:04.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | anwesham | null | anwesham/indicbert_hi_ur | 5 | null | transformers | 16,952 | Entry not found |
GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner | 27dc9c694c8307bcee5a42e9aa4f7d8f3e417909 | 2022-03-14T14:23:21.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner | 5 | null | sentence-transformers | 16,953 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner')
model = AutoModel.from_pretrained('GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
cambridgeltl/guardian_news_distilbert-base-uncased | 35332e79269692b1cf536a172abbfb4330054d01 | 2022-03-14T15:47:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | cambridgeltl | null | cambridgeltl/guardian_news_distilbert-base-uncased | 5 | null | transformers | 16,954 | Entry not found |
Simply-divine/finetune_indian_asr | c79091f0adb512172b65a1f3c57c28127a65ed30 | 2022-03-15T22:57:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Simply-divine | null | Simply-divine/finetune_indian_asr | 5 | 1 | transformers | 16,955 | ---
tags:
- generated_from_trainer
model-index:
- name: finetune_indian_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_indian_asr
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4215
- Wer: 0.3403
## Model description
More information needed
## Intended uses & limitations
More information needed
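A minimal transcription sketch, assuming the standard automatic-speech-recognition pipeline and a local 16 kHz recording (the file path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Simply-divine/finetune_indian_asr",
)

# Placeholder path: a mono 16 kHz recording of Indian-accented English speech.
print(asr("sample_audio.wav")["text"])
```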
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0566 | 3.45 | 500 | 2.9944 | 1.0 |
| 2.7241 | 6.9 | 1000 | 1.4455 | 0.7654 |
| 0.9755 | 10.34 | 1500 | 0.4299 | 0.4034 |
| 0.4624 | 13.79 | 2000 | 0.3628 | 0.3297 |
| 0.3158 | 17.24 | 2500 | 0.3835 | 0.2952 |
| 0.2604 | 20.69 | 3000 | 0.3802 | 0.2877 |
| 0.2 | 24.14 | 3500 | 0.3842 | 0.2799 |
| 1.7441 | 27.59 | 4000 | 0.4215 | 0.3403 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
GleamEyeBeast/ASCEND_Dataset_Model | 048727f41ca27be9533ea7d796a7395c330ea3aa | 2022-03-16T22:58:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/ASCEND_Dataset_Model | 5 | null | transformers | 16,956 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ASCEND_Dataset_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASCEND_Dataset_Model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9199
- Wer: 0.9540
- Cer: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 16.9063 | 1.0 | 687 | 4.7768 | 1.0 | 1.0 |
| 5.0252 | 2.0 | 1374 | 4.7004 | 1.0 | 1.0 |
| 4.9378 | 3.0 | 2061 | 4.6715 | 1.0 | 1.0 |
| 5.1468 | 4.0 | 2748 | 4.6605 | 1.0 | 1.0 |
| 4.9353 | 5.0 | 3435 | 4.6470 | 1.0 | 1.0 |
| 4.913 | 6.0 | 4122 | 4.6177 | 1.0 | 1.0 |
| 4.8034 | 7.0 | 4809 | 4.7699 | 1.0 | 1.0 |
| 4.6905 | 8.0 | 5496 | 4.3596 | 1.0 | 1.0 |
| 4.5251 | 9.0 | 6183 | 4.2670 | 1.0 | 1.0 |
| 4.4527 | 10.0 | 6870 | 4.2087 | 1.0 | 1.0 |
| 4.3731 | 11.0 | 7557 | 4.1950 | 0.9982 | 0.9997 |
| 4.3461 | 12.0 | 8244 | 4.2287 | 0.9928 | 0.9988 |
| 4.3224 | 13.0 | 8931 | 4.1565 | 0.9802 | 0.9971 |
| 4.2504 | 14.0 | 9618 | 4.1254 | 0.9619 | 0.9937 |
| 4.2196 | 15.0 | 10305 | 4.0377 | 0.9562 | 0.9913 |
| 4.1911 | 16.0 | 10992 | 4.0576 | 0.9601 | 0.9887 |
| 4.1079 | 17.0 | 11679 | 4.0630 | 0.9544 | 0.9857 |
| 4.1117 | 18.0 | 12366 | 4.0009 | 0.9558 | 0.9880 |
| 4.0324 | 19.0 | 13053 | 3.9245 | 0.9540 | 0.9877 |
| 3.9871 | 20.0 | 13740 | 3.9199 | 0.9540 | 0.9868 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ScandinavianMrT/gpt2_supervised_SARC_3epochs_withcontext | 0cca0d1b5aee0ce4fa577eef9ced7f3c3103df07 | 2022-03-15T17:08:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | ScandinavianMrT | null | ScandinavianMrT/gpt2_supervised_SARC_3epochs_withcontext | 5 | null | transformers | 16,957 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_supervised_SARC_3epochs_withcontext
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_supervised_SARC_3epochs_withcontext
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0949
## Model description
More information needed
## Intended uses & limitations
More information needed
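A minimal generation sketch; the prompt format below is purely illustrative, since the card does not describe how context and response were concatenated during fine-tuning:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ScandinavianMrT/gpt2_supervised_SARC_3epochs_withcontext",
)

# Illustrative prompt only: the real training concatenation scheme is undocumented.
prompt = "Context: The meeting ran three hours over time. Response:"
print(generator(prompt, max_length=60, do_sample=True, top_p=0.9)[0]["generated_text"])
```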
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.185 | 1.0 | 16989 | 3.1178 |
| 3.1342 | 2.0 | 33978 | 3.1008 |
| 3.1062 | 3.0 | 50967 | 3.0949 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
bitsanlp/Homophobia-Transphobia-v2-mBERT-EDA | 217eec3a25cbc122d372c5f801177e46bc731a13 | 2022-03-15T17:31:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | bitsanlp | null | bitsanlp/Homophobia-Transphobia-v2-mBERT-EDA | 5 | null | transformers | 16,958 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Homophobia-Transphobia-v2-mBERT-EDA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Homophobia-Transphobia-v2-mBERT-EDA
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5401
- Accuracy: 0.9317
- F1: 0.4498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1699 | 1.0 | 189 | 0.4125 | 0.9229 | 0.4634 |
| 0.0387 | 2.0 | 378 | 0.4658 | 0.9229 | 0.3689 |
| 0.0148 | 3.0 | 567 | 0.5250 | 0.9355 | 0.4376 |
| 0.0005 | 4.0 | 756 | 0.5336 | 0.9317 | 0.4531 |
| 0.0016 | 5.0 | 945 | 0.5401 | 0.9317 | 0.4498 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
csclarke/MARS-Encoder | 95b6f74bd5f787d56ec7b0bc3b7397fdda9023af | 2022-03-16T00:36:53.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:cc"
] | text-classification | false | csclarke | null | csclarke/MARS-Encoder | 5 | null | transformers | 16,959 | ---
license: cc
---
# MARS Encoder for Multi-agent Response Selection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class and is the model used in the paper [One Agent To Rule Them All: Towards Multi-agent Conversational AI](https://csclarke.com/assets/pdf/ACL_2022.pdf).
## Training Data
This model was trained on the [BBAI dataset](https://github.com/ChrisIsKing/black-box-multi-agent-integation/tree/main/data). Given a user question and a candidate response from a conversational agent, the model predicts a score between 0 and 1 that ranks how well the response answers the question.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('csclarke/MARS-Encoder')
scores = model.predict([('question 1', 'response 1'), ('question 1', 'response 2')])
```
The model will predict scores for the pairs `('question 1', 'response 1')` and `('question 1', 'response 2')`.
You can also use this model without sentence-transformers, by loading it directly with the Transformers ``AutoModel`` classes.
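A minimal sketch of that plain-Transformers route, assuming the checkpoint exposes a standard sequence-classification head (the usual way sentence-transformers cross-encoders are saved; not confirmed by this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "csclarke/MARS-Encoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score question/response pairs roughly the way CrossEncoder.predict would.
features = tokenizer(
    ["question 1", "question 1"],
    ["response 1", "response 2"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    scores = torch.sigmoid(model(**features).logits.squeeze(-1))
print(scores)
```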
|
clapika2010/rayyan_predictions | 8eef0be48a59cfd53ef62e88d40a365c62ba77ba | 2022-03-16T06:23:43.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | clapika2010 | null | clapika2010/rayyan_predictions | 5 | null | transformers | 16,960 | Entry not found |
PSW/speaker-change-bart-samsum | 7b18a696658d8ca05f932db8e1a6c8abb5ef44d2 | 2022-03-16T01:34:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/speaker-change-bart-samsum | 5 | null | transformers | 16,961 | Entry not found |
aws-ai/vascl-roberta-base | 50dff1759d2d12e6e3eec09f1e9f50a6ab56928b | 2022-03-16T04:22:10.000Z | [
"pytorch",
"roberta",
"transformers",
"license:apache-2.0"
] | null | false | aws-ai | null | aws-ai/vascl-roberta-base | 5 | null | transformers | 16,962 | ---
license: apache-2.0
---
|
cambridgeltl/guardian_news_electra_small | c5fd8b2fe2cebbf70c9f3178e1409397bab5a216 | 2022-03-16T10:32:03.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | cambridgeltl | null | cambridgeltl/guardian_news_electra_small | 5 | null | transformers | 16,963 | Entry not found |
anton-l/xtreme_s_xlsr_minds14_upd | c204f31b6c99dce29eb278ccebbcc78cb8d5378c | 2022-03-16T11:52:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:xtreme_s",
"transformers",
"minds14",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | anton-l | null | anton-l/xtreme_s_xlsr_minds14_upd | 5 | null | transformers | 16,964 | ---
license: apache-2.0
tags:
- minds14
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_minds14_upd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14_upd
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6303
- F1: 0.0223
- Accuracy: 0.0833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
Rustem/roberta-base-trained-50k-docs | 579bbb1d54fd620949f64238dc23b54f2a4462f6 | 2022-03-16T12:38:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | Rustem | null | Rustem/roberta-base-trained-50k-docs | 5 | null | transformers | 16,965 | ---
license: apache-2.0
---
|
ScandinavianMrT/distilbert-IMDB-POS | 6a1810c20a91f42e3c1abb62c1e4d3a50b7210d4 | 2022-03-16T18:15:20.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert-IMDB-POS | 5 | null | transformers | 16,966 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1905
- Accuracy: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1928 | 1.0 | 2000 | 0.1905 | 0.9295 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ScandinavianMrT/distilbert-SARC_withcontext_3.0 | cf97125be65f960ad30b30144c35b6a36b8ec9e5 | 2022-03-16T20:03:20.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert-SARC_withcontext_3.0 | 5 | null | transformers | 16,967 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-SARC_withcontext_3.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-SARC_withcontext_3.0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
KoichiYasuoka/roberta-small-belarusian | 55acabfaccc375eb04e38cd89efeff44ec66a5ca | 2022-03-17T07:58:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"be",
"dataset:cc100",
"transformers",
"belarusian",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-small-belarusian | 5 | null | transformers | 16,968 | ---
language:
- "be"
tags:
- "belarusian"
- "masked-lm"
license: "cc-by-sa-4.0"
datasets:
- "cc100"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# roberta-small-belarusian
## Model Description
This is a RoBERTa model pre-trained on [CC-100](https://data.statmt.org/cc-100/). You can fine-tune `roberta-small-belarusian` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-belarusian-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-belarusian")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-belarusian")
```
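For example, a short fill-mask sketch built on the objects loaded above (the Belarusian example sentence is illustrative, and the mask token `[MASK]` is taken from the card metadata):
```py
import torch

text = "Я жыву ў горадзе [MASK]."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token for the [MASK] position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = int(logits[0, mask_index].argmax(-1))
print(tokenizer.decode([predicted_id]))
```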
|
cambridgeltl/guardian_news_electra_base | 103d92e5752b2caba814bf5b9bc879e5b7c74d1b | 2022-03-17T09:34:31.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | cambridgeltl | null | cambridgeltl/guardian_news_electra_base | 5 | null | transformers | 16,969 | Entry not found |
taehyunzzz/distilbert-base-uncased-finetuned-ner | 141e4069dc9c6136a01e3e12b81f0159d9edbde5 | 2022-03-17T10:46:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | taehyunzzz | null | taehyunzzz/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,970 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9032328767123288
- name: Recall
type: recall
value: 0.9220270723794608
- name: F1
type: f1
value: 0.912533215234721
- name: Accuracy
type: accuracy
value: 0.979951387675346
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0722
- Precision: 0.9032
- Recall: 0.9220
- F1: 0.9125
- Accuracy: 0.9800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 220 | 0.0974 | 0.8663 | 0.8865 | 0.8763 | 0.9735 |
| No log | 2.0 | 440 | 0.0754 | 0.8947 | 0.9176 | 0.9060 | 0.9790 |
| 0.1921 | 3.0 | 660 | 0.0722 | 0.9032 | 0.9220 | 0.9125 | 0.9800 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.11.6
|
facebook/regnet-y-008 | 8afb013500166812b7b3fcdc04f75062fc3a6894 | 2022-06-30T10:21:48.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-008 | 5 | null | transformers | 16,971 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-008")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-008")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
ScandinavianMrT/distilbert-IMDB-NEG | 6c6fad72459086ad9bfd923a18079f6035e640dd | 2022-03-18T16:43:11.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert-IMDB-NEG | 5 | null | transformers | 16,972 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB-NEG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB-NEG
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1871
- Accuracy: 0.9346
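As a quick usage sketch (not part of the original card), the checkpoint can be loaded with the standard text-classification pipeline; the example sentence below is purely illustrative:
```python
from transformers import pipeline
# Load the fine-tuned IMDB sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="ScandinavianMrT/distilbert-IMDB-NEG")
# Illustrative input; the label names depend on the model's config.
print(classifier("The plot was thin, but the acting carried the film."))
```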
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1865 | 1.0 | 2000 | 0.1871 | 0.9346 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Rustem/roberta-base-best | 4658858b2f4de6e3150177644200c21b490014db | 2022-03-18T23:14:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Rustem | null | Rustem/roberta-base-best | 5 | null | transformers | 16,973 | Entry not found |
ShengdingHu/CAPITALIZE_T5-LowRankAdapter | 136570061af98f96e305d7cc3212062e5158fa03 | 2022-03-19T17:41:42.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/CAPITALIZE_T5-LowRankAdapter | 5 | null | transformers | 16,974 | Entry not found |
ShengdingHu/Capitalize_T5-LoRA | 60cf82b08f5ad24d03d1cb39489e81f2939f3af0 | 2022-03-19T18:48:58.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/Capitalize_T5-LoRA | 5 | null | transformers | 16,975 | Entry not found |
Ketzu/koelectra-sts-v0.6 | a6fa4782965a009e8281049ae5a01259477615a5 | 2022-03-22T13:18:11.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Ketzu | null | Ketzu/koelectra-sts-v0.6 | 5 | null | transformers | 16,976 | ---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: koelectra-sts-v0.6
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8698381401893762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-sts-v0.6
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0059
- Pearson: 0.9988
- Spearmanr: 0.8698
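For reference, a minimal scoring sketch (not from the original card; the Korean sentence pair is illustrative and a single-value regression head is assumed):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("Ketzu/koelectra-sts-v0.6")
model = AutoModelForSequenceClassification.from_pretrained("Ketzu/koelectra-sts-v0.6")
# Illustrative sentence pair; assumes the model outputs one similarity score.
inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```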
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|
| 0.0036 | 1.0 | 6250 | 0.0082 | 0.9983 | 0.8698 |
| 0.0038 | 2.0 | 12500 | 0.0065 | 0.9986 | 0.8697 |
| 0.0105 | 3.0 | 18750 | 0.0071 | 0.9985 | 0.8698 |
| 0.0008 | 4.0 | 25000 | 0.0059 | 0.9988 | 0.8698 |
| 0.0008 | 5.0 | 31250 | 0.0059 | 0.9988 | 0.8698 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
beston91/gpt2-xl_ft_logits_10k | 77dcd18c485e3bbb76a35f930e9666381da838c7 | 2022-03-24T05:04:35.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_logits_10k | 5 | null | transformers | 16,977 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_10k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3791
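A minimal generation sketch (not part of the original card; note that gpt2-xl checkpoints are several gigabytes, and the prompt and sampling settings are illustrative):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="beston91/gpt2-xl_ft_logits_10k")
# Illustrative prompt; adjust max_length and sampling to taste.
print(generator("Once upon a time", max_length=50, do_sample=True)[0]["generated_text"])
```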
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 54 | 6.1576 |
| No log | 1.99 | 108 | 6.2663 |
| No log | 2.99 | 162 | 6.3520 |
| No log | 3.99 | 216 | 6.3791 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-roberta-no-adapter-regularisation | 8c0ea7fedc5b488b32246c81be831c18c4bab6c2 | 2022-03-22T09:45:38.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-roberta-no-adapter-regularisation | 5 | null | transformers | 16,978 | Entry not found |
claytonsamples/distilbert-base-uncased-finetuned-emotion | d11d27f570b12c4cdbd0db0dfa9125b8c24c2498 | 2022-03-21T03:56:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | claytonsamples | null | claytonsamples/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 16,979 | Entry not found |
cammy/PRIMERA-100-MDS-own2 | 5b347294d13904bcfab0d6d2a0b265524c5543e2 | 2022-03-21T04:41:09.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/PRIMERA-100-MDS-own2 | 5 | null | transformers | 16,980 | Entry not found |
ScandinavianMrT/distilbert_ONION_1epoch | f0f57a61a31cd1ff0a370b6c0489ab021516af38 | 2022-03-21T15:06:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert_ONION_1epoch | 5 | null | transformers | 16,981 | Entry not found |
mimicheng/codeparrot-ds | e51b27e27ea9b28ea51a99709c9256744023bf0c | 2022-03-22T03:45:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | mimicheng | null | mimicheng/codeparrot-ds | 5 | null | transformers | 16,982 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7397
- eval_runtime: 603.8598
- eval_samples_per_second: 154.281
- eval_steps_per_second: 4.822
- epoch: 0.08
- step: 5000
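A minimal usage sketch (added for illustration; the prompt below is an assumption, not from the card):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="mimicheng/codeparrot-ds")
# CodeParrot-style models are trained on Python source, so a code prompt is natural.
print(generator("def fibonacci(n):", max_length=64)[0]["generated_text"])
```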
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES | 5a2bddd46579f735c53982ca1f48ea02f4a51dd7 | 2022-03-21T22:25:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES | 5 | null | transformers | 16,983 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Precision: 0.8298
- Recall: 0.8306
- F1: 0.8302
- Accuracy: 0.9659
## Model description
This model performs Named Entity Recognition for six entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical, from the CRAFT (Colorado Richly Annotated Full Text) corpus in English. Entity tags have been normalized and replaced, mapping the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag, obtained from the official ontology for each entity class. Three datasets (original, augmented, and MT-translated CRAFT) were concatenated for training.
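As an illustration of how such a token-classification checkpoint is typically queried (this snippet and the Spanish example sentence are not from the original card):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES",
    aggregation_strategy="simple",
)
# Illustrative biomedical sentence in Spanish.
print(ner("El gen BRCA1 se expresa en células epiteliales de ratón."))
```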
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0624 | 1.0 | 4078 | 0.1844 | 0.8002 | 0.7923 | 0.7963 | 0.9607 |
| 0.0284 | 2.0 | 8156 | 0.1937 | 0.8394 | 0.7988 | 0.8186 | 0.9637 |
| 0.0118 | 3.0 | 12234 | 0.2007 | 0.8285 | 0.8232 | 0.8258 | 0.9649 |
| 0.0043 | 4.0 | 16312 | 0.2224 | 0.8298 | 0.8306 | 0.8302 | 0.9659 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
danyaljj/gpt-j-6B-step-383500 | 2e05b8303ea9490a8f9de37df763d34d3ce424e7 | 2022-03-22T23:12:19.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-383500 | 5 | null | transformers | 16,984 | Entry not found |
edmz/distilbert-base-uncased-finetuned-ner | 883087cba003e84f215451c8ad32a2f56f37c67f | 2022-03-22T09:56:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | edmz | null | edmz/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,985 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9247134038800705
- name: Recall
type: recall
value: 0.9384718648618414
- name: F1
type: f1
value: 0.9315418355449449
- name: Accuracy
type: accuracy
value: 0.9836529143565221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9247
- Recall: 0.9385
- F1: 0.9315
- Accuracy: 0.9837
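A quick usage sketch (illustrative, not from the original card):
```python
from transformers import pipeline
ner = pipeline("token-classification", model="edmz/distilbert-base-uncased-finetuned-ner", aggregation_strategy="simple")
# Illustrative sentence; entity labels follow the CoNLL-2003 scheme (PER, ORG, LOC, MISC).
print(ner("Hugging Face was founded in New York City."))
```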
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2421 | 1.0 | 878 | 0.0701 | 0.9083 | 0.9217 | 0.9149 | 0.9801 |
| 0.0555 | 2.0 | 1756 | 0.0599 | 0.9204 | 0.9357 | 0.9280 | 0.9830 |
| 0.0311 | 3.0 | 2634 | 0.0612 | 0.9247 | 0.9385 | 0.9315 | 0.9837 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
PSW/ut_del_three_per_each_ver1 | 914ba1d439aefed653f4171a1d03c5b5adc98057 | 2022-03-22T14:26:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_three_per_each_ver1 | 5 | null | transformers | 16,986 | Entry not found |
vinaykudari/t5-acled-t2s | 9b7597e67017095f19f02f915312444b1a8dd32b | 2022-05-09T14:54:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vinaykudari | null | vinaykudari/t5-acled-t2s | 5 | null | transformers | 16,987 | Entry not found |
gayanin/bart-med-term-conditional-masking | cec6198b3d2fd17f1416dc3236e26020bd17aa61 | 2022-03-23T19:06:03.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/bart-med-term-conditional-masking | 5 | null | transformers | 16,988 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-conditional-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-conditional-masking
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5115
- Rouge2 Precision: 0.7409
- Rouge2 Recall: 0.5343
- Rouge2 Fmeasure: 0.6025
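A hedged usage sketch (the input format, including the `<mask>` placeholder, is assumed rather than documented in this card):
```python
from transformers import pipeline
filler = pipeline("text2text-generation", model="gayanin/bart-med-term-conditional-masking")
# Illustrative masked sentence; "<mask>" is BART's default mask token.
print(filler("The patient was diagnosed with <mask> after the blood test.")[0]["generated_text"])
```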
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6278 | 1.0 | 15827 | 0.5546 | 0.7255 | 0.5244 | 0.5908 |
| 0.5356 | 2.0 | 31654 | 0.5286 | 0.7333 | 0.5293 | 0.5966 |
| 0.4757 | 3.0 | 47481 | 0.5154 | 0.7376 | 0.532 | 0.5998 |
| 0.4337 | 4.0 | 63308 | 0.5107 | 0.7406 | 0.5342 | 0.6023 |
| 0.4045 | 5.0 | 79135 | 0.5115 | 0.7409 | 0.5343 | 0.6025 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ScandinavianMrT/distilbert_ONION_3epoch | 4dfa0918608ee174b2da35c7d4b41f444dce42b7 | 2022-03-23T15:02:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert_ONION_3epoch | 5 | null | transformers | 16,989 | Entry not found |
Zohar/distilgpt2-finetuned-hotel-reviews | 45046a43f7dc9dac8c4f4addd8fdb14a8ca6ea1e | 2022-03-23T18:42:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Zohar | null | Zohar/distilgpt2-finetuned-hotel-reviews | 5 | null | transformers | 16,990 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-hotel-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-hotel-reviews
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6253
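A minimal generation sketch (added for illustration; the prompt is assumed):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Zohar/distilgpt2-finetuned-hotel-reviews")
# Illustrative review-style prompt.
print(generator("The hotel room was", max_length=40, do_sample=True)[0]["generated_text"])
```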
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7533 | 1.0 | 1259 | 3.6803 |
| 3.6644 | 2.0 | 2518 | 3.6366 |
| 3.6426 | 3.0 | 3777 | 3.6253 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
|
ScandinavianMrT/distilbert_ONION_1epoch_2.0 | a42ec13b704de443c4c31cf984aec8a295059aba | 2022-03-23T18:30:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert_ONION_1epoch_2.0 | 5 | null | transformers | 16,991 | Entry not found |
huggingtweets/radagasttbrown | 29ef5189032f5ce62b0ccb9df7fa1d500bc6c0f5 | 2022-03-23T21:33:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/radagasttbrown | 5 | null | transformers | 16,992 | ---
language: en
thumbnail: http://www.huggingtweets.com/radagasttbrown/1648071147429/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362404255798280192/yIKMf5AN_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Radagast 🌋</div>
<div style="text-align: center; font-size: 14px;">@radagasttbrown</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Radagast 🌋.
| Data | Radagast 🌋 |
| --- | --- |
| Tweets downloaded | 3228 |
| Retweets | 457 |
| Short tweets | 230 |
| Tweets kept | 2541 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1b1t67ko/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @radagasttbrown's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/boipgvkp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/boipgvkp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/radagasttbrown')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
yy642/bert-base-uncased-finetuned-mnli-max-length-32-epoch-1 | b84b147a8513514adf169cbc0e48330e2affb216 | 2022-03-23T22:33:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-max-length-32-epoch-1 | 5 | 1 | transformers | 16,993 | Entry not found |
radev/pegasus-samsum | 6dfeef4cc0c82b837ab787b899b5575c74e1d269 | 2022-07-04T15:38:01.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | radev | null | radev/pegasus-samsum | 5 | null | transformers | 16,994 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
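Since the card omits a usage section, here is a hedged summarization sketch (the dialogue is illustrative):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="radev/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```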
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ademarcarneiro/distilbert-base-uncased-finetuned-emotion | fc2d62513c3616837266ee1e3aa926d8ed0fc24d | 2022-03-24T03:20:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | ademarcarneiro | null | ademarcarneiro/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 16,995 | Entry not found |
Helsinki-NLP/opus-mt-tc-base-uk-fi | 829baaf04fcd60145ec90e7f6daebd99b12d4d68 | 2022-06-01T13:10:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-uk-fi | 5 | null | transformers | 16,996 | ---
language:
- fi
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-uk-fi
results:
- task:
name: Translation ukr-fin
type: translation
args: ukr-fin
dataset:
name: flores101-devtest
type: flores_101
args: ukr fin devtest
metrics:
- name: BLEU
type: bleu
value: 19.6
---
# opus-mt-tc-base-uk-fi
Neural machine translation model for translating from Ukrainian (uk) to Finnish (fi).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): ukr
* target language(s): fin
* model: transformer-align
* data: opusTCv20210807+pft+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pft+pbt_transformer-align_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-fin/opusTCv20210807+pft+pbt_transformer-align_2022-03-17.zip)
* more information released models: [OPUS-MT ukr-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-fin/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Африка є колискою людства.",
"Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять."
]
model_name = "pytorch-models/opus-mt-tc-base-uk-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Afrikka on ihmiskunnan kehto.
# Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-uk-fi")
print(pipe("Африка є колискою людства."))
# expected output: Afrikka on ihmiskunnan kehto.
```
## Benchmarks
* test set translations: [opusTCv20210807+pft+pbt_transformer-align_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-fin/opusTCv20210807+pft+pbt_transformer-align_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+pft+pbt_transformer-align_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-fin/opusTCv20210807+pft+pbt_transformer-align_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ukr-fin | flores101-devtest | 0.54827 | 19.6 | 1012 | 18781 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 09:10:42 EET 2022
* port machine: LM0-400-22516.local
|
athiban2001/cord-scibert | 50d2e6a9ca0efb5d511d5e6df94948a3817327a8 | 2022-03-25T05:17:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | athiban2001 | null | athiban2001/cord-scibert | 5 | null | transformers | 16,997 | ---
license: mit
---
|
elihoole/distilgpt2-music-search | 4003f257348c26832ccb3aa2380c276372df0660 | 2022-03-24T08:17:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | elihoole | null | elihoole/distilgpt2-music-search | 5 | null | transformers | 16,998 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-music-search
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-music-search
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6516
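A minimal generation sketch (illustrative only; the prompt is an assumption about the music-search domain):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="elihoole/distilgpt2-music-search")
# Illustrative search-style prompt.
print(generator("relaxing piano music for", max_length=30, do_sample=True)[0]["generated_text"])
```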
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 256 | 4.6572 |
| 5.0184 | 2.0 | 512 | 4.6461 |
| 5.0184 | 3.0 | 768 | 4.6516 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.7.1
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-tc-base-zle-bat | 7340f670bf424c307a2f551bba1526f5059652e4 | 2022-06-01T13:09:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bat",
"lt",
"lv",
"ru",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-zle-bat | 5 | null | transformers | 16,999 | ---
language:
- bat
- lt
- lv
- ru
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-zle-bat
results:
- task:
name: Translation rus-lav
type: translation
args: rus-lav
dataset:
name: flores101-devtest
type: flores_101
args: rus lav devtest
metrics:
- name: BLEU
type: bleu
value: 20.0
- task:
name: Translation rus-lit
type: translation
args: rus-lit
dataset:
name: flores101-devtest
type: flores_101
args: rus lit devtest
metrics:
- name: BLEU
type: bleu
value: 20.6
- task:
name: Translation ukr-lav
type: translation
args: ukr-lav
dataset:
name: flores101-devtest
type: flores_101
args: ukr lav devtest
metrics:
- name: BLEU
type: bleu
value: 21.4
- task:
name: Translation ukr-lit
type: translation
args: ukr-lit
dataset:
name: flores101-devtest
type: flores_101
args: ukr lit devtest
metrics:
- name: BLEU
type: bleu
value: 20.5
- task:
name: Translation rus-lav
type: translation
args: rus-lav
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-lav
metrics:
- name: BLEU
type: bleu
value: 55.3
- task:
name: Translation rus-lit
type: translation
args: rus-lit
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-lit
metrics:
- name: BLEU
type: bleu
value: 47.2
---
# opus-mt-tc-base-zle-bat
Neural machine translation model for translating from East Slavic languages (zle) to Baltic languages (bat).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-14
* source language(s): rus
* target language(s): lav lit
* valid target language labels: >>lav<< >>lit<<
* model: transformer-align
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-align_2022-03-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-bat/opusTCv20210807_transformer-align_2022-03-14.zip)
* more information released models: [OPUS-MT zle-bat README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-bat/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>lav<<`.
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>lav<< Африка - колыбель человечества.",
">>lit<< Том — наш капітан."
]
model_name = "pytorch-models/opus-mt-tc-base-zle-bat"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Āfrika ir cilvēces šūpulis.
# Tomas yra mūsų kapitonas.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-zle-bat")
print(pipe(">>lav<< Африка - колыбель человечества."))
# expected output: Āfrika ir cilvēces šūpulis.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-align_2022-03-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-bat/opusTCv20210807_transformer-align_2022-03-14.test.txt)
* test set scores: [opusTCv20210807_transformer-align_2022-03-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-bat/opusTCv20210807_transformer-align_2022-03-14.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| rus-lav | tatoeba-test-v2021-08-07 | 0.74223 | 55.3 | 274 | 1518 |
| rus-lit | tatoeba-test-v2021-08-07 | 0.70795 | 47.2 | 3598 | 20662 |
| rus-lav | flores101-devtest | 0.50134 | 20.0 | 1012 | 22092 |
| rus-lit | flores101-devtest | 0.53732 | 20.6 | 1012 | 20695 |
| ukr-lav | flores101-devtest | 0.51379 | 21.4 | 1012 | 22092 |
| ukr-lit | flores101-devtest | 0.54085 | 20.5 | 1012 | 20695 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 22:11:57 EET 2022
* port machine: LM0-400-22516.local
|