modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
moaiz237/wav2vec2-base-timit-moaiz_exp2_new | moaiz237 | 2022-04-30T20:03:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-30T19:19:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-moaiz_exp2_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_exp2_new
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6849
- Wer: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1266 | 13.89 | 500 | 1.0233 | 0.7034 |
| 0.5928 | 27.78 | 1000 | 0.6849 | 0.5396 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
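The card above stops at the framework versions; a minimal usage sketch (not part of the original card, assuming a 16 kHz mono input file, here a hypothetical `speech_sample.wav`) would load the checkpoint through the `transformers` ASR pipeline:
```python
# Minimal sketch: transcribe a local audio file with the fine-tuned checkpoint.
# "speech_sample.wav" is a placeholder; the pipeline expects 16 kHz mono audio.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="moaiz237/wav2vec2-base-timit-moaiz_exp2_new",
)
print(asr("speech_sample.wav")["text"])
```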
|
julycodes/wav2vec2-base-timit-demo-colab-2 | julycodes | 2022-04-30T18:57:05Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-30T15:53:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7429
- Wer: 0.5080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 900
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.984 | 8.77 | 500 | 0.9028 | 0.7036 |
| 0.6412 | 17.54 | 1000 | 0.7275 | 0.5868 |
| 0.3073 | 26.32 | 1500 | 0.7429 | 0.5080 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
julycodes/wav2vec2-base-timit-demo-colab-3 | julycodes | 2022-04-30T18:32:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-30T16:19:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6622
- Wer: 0.5082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2195 | 8.77 | 500 | 0.9187 | 0.6635 |
| 0.5996 | 17.54 | 1000 | 0.6569 | 0.5347 |
| 0.2855 | 26.32 | 1500 | 0.6622 | 0.5082 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ParanoidAndroid/bert-finetuned-squad | ParanoidAndroid | 2022-04-30T18:29:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-30T18:16:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
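No usage example is given in the card; a hypothetical extractive-QA sketch (question and context invented for illustration) would be:
```python
# Hypothetical sketch: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="ParanoidAndroid/bert-finetuned-squad")
result = qa(
    question="What was the model fine-tuned from?",
    context="The checkpoint was fine-tuned from bert-base-cased on a SQuAD-style dataset.",
)
print(result["answer"], result["score"])
```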
|
tahazakir/wav2vec2-base-timit-demo-colab0 | tahazakir | 2022-04-30T18:01:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-30T15:37:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8768
- Wer: 0.6089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1121 | 13.89 | 500 | 2.9931 | 1.0 |
| 1.1475 | 27.78 | 1000 | 0.8768 | 0.6089 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
maxime7770/model | maxime7770 | 2022-04-30T15:12:40Z | 5 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-29T11:54:14Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: maxime7770/model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# maxime7770/model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1211
- Validation Loss: 0.4812
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 650, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
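The serialized optimizer dictionary above corresponds to Adam with a linear (power-1 polynomial) learning-rate decay. A rough Keras reconstruction, inferred from that config rather than taken from the original training script, would be:
```python
# Reconstruction of the optimizer config above (illustration only, not the original script).
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=650,
    end_learning_rate=0.0,
    power=1.0,   # power=1.0 makes the decay linear
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```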
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5966 | 1.5898 | 0 |
| 1.5577 | 1.5576 | 1 |
| 1.5034 | 1.4761 | 2 |
| 1.4034 | 1.3538 | 3 |
| 1.2864 | 1.2163 | 4 |
| 1.1502 | 1.0980 | 5 |
| 1.0085 | 0.9988 | 6 |
| 0.8828 | 0.9130 | 7 |
| 0.7863 | 0.8445 | 8 |
| 0.7036 | 0.7871 | 9 |
| 0.6322 | 0.7399 | 10 |
| 0.5731 | 0.7030 | 11 |
| 0.5180 | 0.6714 | 12 |
| 0.4757 | 0.6432 | 13 |
| 0.4366 | 0.6204 | 14 |
| 0.4057 | 0.6006 | 15 |
| 0.3743 | 0.5827 | 16 |
| 0.3475 | 0.5689 | 17 |
| 0.3221 | 0.5577 | 18 |
| 0.2971 | 0.5467 | 19 |
| 0.2815 | 0.5372 | 20 |
| 0.2700 | 0.5297 | 21 |
| 0.2521 | 0.5225 | 22 |
| 0.2343 | 0.5168 | 23 |
| 0.2265 | 0.5117 | 24 |
| 0.2143 | 0.5074 | 25 |
| 0.2063 | 0.5038 | 26 |
| 0.1941 | 0.5001 | 27 |
| 0.1843 | 0.4976 | 28 |
| 0.1782 | 0.4949 | 29 |
| 0.2012 | 0.4938 | 30 |
| 0.1691 | 0.4930 | 31 |
| 0.1626 | 0.4910 | 32 |
| 0.1884 | 0.4886 | 33 |
| 0.1547 | 0.4870 | 34 |
| 0.1492 | 0.4858 | 35 |
| 0.1445 | 0.4850 | 36 |
| 0.1415 | 0.4842 | 37 |
| 0.1383 | 0.4836 | 38 |
| 0.1374 | 0.4832 | 39 |
| 0.1336 | 0.4826 | 40 |
| 0.1322 | 0.4823 | 41 |
| 0.1295 | 0.4820 | 42 |
| 0.1268 | 0.4818 | 43 |
| 0.1261 | 0.4816 | 44 |
| 0.1253 | 0.4815 | 45 |
| 0.1275 | 0.4814 | 46 |
| 0.1247 | 0.4812 | 47 |
| 0.1256 | 0.4812 | 48 |
| 0.1211 | 0.4812 | 49 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ahmad573/wav2vec2-base-timit-demo-colab | ahmad573 | 2022-04-30T15:09:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-27T11:53:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5827
- Wer: 0.4147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4314 | 7.04 | 500 | 0.5453 | 0.4922 |
| 0.2357 | 14.08 | 1000 | 0.5573 | 0.4376 |
| 0.1283 | 21.13 | 1500 | 0.5827 | 0.4147 |
| 0.1169 | 28.17 | 2000 | 0.5827 | 0.4147 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Davincilee/door_inner | Davincilee | 2022-04-30T15:07:38Z | 0 | 1 | null | [
"region:us"
] | null | 2022-04-30T14:47:04Z | language:
- "List of ISO 639-1 code for your language" |
julycodes/wav2vec2-base-timit-demo-colab | julycodes | 2022-04-30T14:40:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-30T11:54:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6574
- Wer: 0.5652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6258 | 8.77 | 500 | 3.1693 | 1.0 |
| 1.4137 | 17.54 | 1000 | 0.6574 | 0.5652 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Volodia/distilbert-base-uncased-finetuned-emotion | Volodia | 2022-04-30T13:45:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-30T13:25:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9280089473757943
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2102
- Accuracy: 0.928
- F1: 0.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8028 | 1.0 | 250 | 0.2998 | 0.913 | 0.9117 |
| 0.2314 | 2.0 | 500 | 0.2102 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
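The card lists no inference snippet; an illustrative sketch (the input sentence is made up, and label names depend on the model's `id2label` config) would be:
```python
# Illustrative sketch: classify a sentence with the fine-tuned emotion checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Volodia/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled the fine-tuning finally converged!"))
# e.g. [{'label': 'joy', 'score': 0.99...}] -- exact labels depend on the config
```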
|
learningdude/wav2vec2-base-finetuned-ks | learningdude | 2022-04-30T13:35:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-04-30T07:56:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0834
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6111 | 1.0 | 399 | 0.5123 | 0.9388 |
| 0.2901 | 2.0 | 798 | 0.1725 | 0.9782 |
| 0.1916 | 3.0 | 1197 | 0.1060 | 0.9834 |
| 0.1754 | 4.0 | 1596 | 0.0891 | 0.9829 |
| 0.1384 | 5.0 | 1995 | 0.0834 | 0.9840 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.12.1
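As a hypothetical usage sketch (not from the original card; `keyword_clip.wav` is a placeholder for a 16 kHz recording), the checkpoint can be queried through the audio-classification pipeline:
```python
# Hypothetical sketch: keyword spotting on a local clip with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="learningdude/wav2vec2-base-finetuned-ks",
)
print(classifier("keyword_clip.wav", top_k=3))  # placeholder file name
```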
|
sameearif88/wav2vec2-base-timit-demo-colab | sameearif88 | 2022-04-30T13:08:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-26T10:31:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/itstomrobinson | huggingtweets | 2022-04-30T07:06:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-30T06:45:28Z | ---
language: en
thumbnail: http://www.huggingtweets.com/itstomrobinson/1651302371165/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1388470365723168770/irz46Ykl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Robinson</div>
<div style="text-align: center; font-size: 14px;">@itstomrobinson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tom Robinson.
| Data | Tom Robinson |
| --- | --- |
| Tweets downloaded | 733 |
| Retweets | 40 |
| Short tweets | 52 |
| Tweets kept | 641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bluc7sk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itstomrobinson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/itstomrobinson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
obokkkk/mt5-base_2 | obokkkk | 2022-04-30T05:52:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T06:50:46Z | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base_2
This model is a fine-tuned version of [obokkkk/mt5-base](https://huggingface.co/obokkkk/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1742
- Bleu: 9.479
- Gen Len: 16.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
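The reported total train batch size is simply the per-device batch size scaled by the gradient accumulation steps; as a quick sanity check:
```python
# Effective batch size implied by the hyperparameters above.
train_batch_size = 8
gradient_accumulation_steps = 256
print(train_batch_size * gradient_accumulation_steps)  # 2048 == total_train_batch_size
```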
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 183 | 1.1834 | 9.3761 | 16.9129 |
| No log | 2.0 | 366 | 1.1791 | 9.422 | 16.9334 |
| 1.3969 | 3.0 | 549 | 1.1764 | 9.4432 | 16.9082 |
| 1.3969 | 4.0 | 732 | 1.1749 | 9.461 | 16.9157 |
| 1.3969 | 5.0 | 915 | 1.1742 | 9.479 | 16.9226 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/CoverLetter | BigSalmon | 2022-04-30T01:42:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-30T01:36:51Z | how to do initial prompt:
captivated by [Enter Company Name]'s
also trained on: https://huggingface.co/BigSalmon/InformalToFormalLincoln40 (so you can use those prompt outlines, too) |
tonydiana1/distilroberta-base-finetuned-wikitext2 | tonydiana1 | 2022-04-30T01:23:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-30T01:01:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0853 | 1.0 | 2406 | 1.9214 |
| 1.986 | 2.0 | 4812 | 1.8799 |
| 1.9568 | 3.0 | 7218 | 1.8202 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
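A minimal usage sketch for the fine-tuned masked language model (not part of the original card; the example sentence is arbitrary):
```python
# Minimal sketch: query the fine-tuned masked LM. RoBERTa-style models use the <mask> token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tonydiana1/distilroberta-base-finetuned-wikitext2")
print(fill_mask("The capital of France is <mask>."))
```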
|
Siddhart/t5-small-finetuned-xsum | Siddhart | 2022-04-30T00:04:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T23:51:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 23 | 2.7230 | 33.2094 | 14.0331 | 28.4433 | 29.4644 | 18.8947 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
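Assuming the checkpoint is used for XSum-style single-sentence summarization, a hypothetical usage sketch (the article text is made up) would be:
```python
# Hypothetical sketch: short abstractive summary with the fine-tuned T5 checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="Siddhart/t5-small-finetuned-xsum")
article = (
    "The local council approved a new cycling path connecting the station to the "
    "harbour, with construction expected to start next spring."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False))
```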
|
stas/tiny-m2m_100 | stas | 2022-04-29T23:57:25Z | 1,370 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"testing",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T23:50:29Z | ---
language:
- en
thumbnail:
tags:
- testing
license: apache-2.0
---
# Tiny M2M100 model
This is a tiny model that is used in the `transformers` test suite. It doesn't do anything useful beyond functional testing.
Do not try to use it for anything that requires quality.
The model is indeed 4MB in size.
You can see how it was created [here](https://huggingface.co/stas/tiny-m2m_100/blob/main/m2m-make-tiny-model.py)
If you're looking for the real model, please go to [https://huggingface.co/facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M).
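A sketch of how such a tiny checkpoint is typically exercised in tests (illustration, not from the original card):
```python
# Load the tiny checkpoint just to exercise the M2M100 code path quickly.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stas/tiny-m2m_100")
model = AutoModelForSeq2SeqLM.from_pretrained("stas/tiny-m2m_100")
print(f"{model.num_parameters():,} parameters")  # small enough for fast functional tests
```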
|
dhlanm/distilbert-base-uncased-finetune | dhlanm | 2022-04-29T23:47:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-29T22:16:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1315
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 48 | 0.1349 | 0.0 | 0.0 | 0.0 | 0.9715 |
| No log | 2.0 | 96 | 0.1318 | 0.0 | 0.0 | 0.0 | 0.9715 |
| No log | 3.0 | 144 | 0.1315 | 0.0 | 0.0 | 0.0 | 0.9715 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
doc2query/msmarco-vietnamese-mt5-base-v1 | doc2query | 2022-04-29T22:06:03Z | 18 | 4 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"vi",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T22:05:47Z | ---
language: vi
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."
license: apache-2.0
---
# doc2query/msmarco-vietnamese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: Generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index such as Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) includes an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: The model can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) shows how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-vietnamese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')


create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
espnet/turkish_commonvoice_blstm | espnet | 2022-04-29T21:33:48Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"tr",
"dataset:commonvoice",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-04-29T21:32:59Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: tr
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/turkish_commonvoice_blstm`
This model was trained by dzeinali using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/turkish_commonvoice_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Apr 16 17:16:06 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b`
- Commit date: `Mon Apr 4 21:04:45 2022 -0400`
## asr_tr_50_epoch_lr_0.1
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_tr|8339|43647|78.5|19.6|2.0|1.6|23.1|50.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_tr|8339|306849|94.3|3.2|2.5|1.1|6.8|50.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_tr|8339|203431|91.0|5.8|3.2|1.3|10.3|50.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn_tr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_tr_50_epoch_lr_0.1
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_tr_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_tr_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_tr_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_tr_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_tr_sp/wav.scp
- speech
- sound
- - dump/raw/train_tr_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_tr/wav.scp
- speech
- sound
- - dump/raw/dev_tr/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- R
- K
- E
- .
- I
- N
- L
- ı
- A
- M
- T
- U
- Y
- S
- Z
- ş
- ü
- O
- ▁A
- ç
- DI
- MA
- IN
- ▁BU
- LA
- ','
- H
- RA
- LAR
- ▁BIR
- DE
- ME
- ö
- '?'
- Dı
- DA
- AN
- ▁KA
- LI
- LER
- F
- LE
- EN
- P
- B
- V
- DU
- YE
- UN
- ▁G
- TE
- ▁BE
- BI
- YA
- KI
- Tı
- BA
- ▁OL
- TI
- ▁DE
- ▁HA
- ▁YA
- ıN
- AR
- IM
- Sı
- D
- Lı
- ER
- C
- ▁S
- NA
- üN
- IYOR
- ▁NE
- ▁I
- ▁O
- ▁SA
- ▁"
- ▁DA
- SI
- G
- ▁P
- TA
- ▁SE
- ▁VE
- KA
- ''''
- UM
- DEN
- ▁GE
- Dü
- ."
- ıYOR
- ▁TA
- '!'
- CE
- VA
- ▁HE
- UZ
- GI
- ıNDA
- ıNı
- ▁MI
- LAN
- ▁BAş
- ▁ON
- CA
- İ
- DAN
- SIN
- '...'
- ▁DO
- ▁GöR
- ▁KO
- ▁VAR
- ACAK
- ▁GEL
- ▁YAP
- ▁SON
- ▁ET
- ▁IKI
- Ç
- Ş
- '"'
- J
- Ö
- ':'
- â
- Ü
- ;
- '-'
- W
- X
- ’
- ”
- ‘
- î
- ë
- Q
- (
- Â
- û
- “
- )
- ğ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/tr_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_tr_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/french_commonvoice_blstm | espnet | 2022-04-29T21:22:54Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"fr",
"dataset:commonvoice",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-04-29T21:22:08Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: fr
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/french_commonvoice_blstm`
This model was trained by dzeinali using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/french_commonvoice_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Apr 29 17:20:37 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `716eb8f92e19708acfd08ba3bd39d40890d3a84b`
- Commit date: `Thu Apr 28 19:50:59 2022 -0400`
## asr_train_asr_rnn_raw_fr_bpe350_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_fr|15621|151227|75.1|22.6|2.3|2.3|27.2|81.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_fr|15621|952803|92.9|3.6|3.5|2.0|9.1|81.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_fr|15621|730898|89.9|6.5|3.6|1.9|12.0|81.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_raw_fr_bpe350_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 30
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_fr_bpe350_sp/train/speech_shape
- exp/asr_stats_raw_fr_bpe350_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_fr_bpe350_sp/valid/speech_shape
- exp/asr_stats_raw_fr_bpe350_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_fr_sp/wav.scp
- speech
- sound
- - dump/raw/train_fr_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_fr/wav.scp
- speech
- sound
- - dump/raw/dev_fr/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- S
- ▁
- E
- I
- T
- A
- U
- O
- .
- L
- R
- é
- P
- C
- V
- 'ON'
- M
- ▁DE
- ','
- N
- ▁S
- D
- IN
- ''''
- OU
- ▁D
- G
- IS
- ▁P
- ER
- ▁C
- ▁L
- ▁LA
- B
- ▁"
- ▁A
- RE
- AN
- ."
- ▁M
- ▁F
- '-'
- F
- ▁T
- ES
- ENT
- ▁LE
- EN
- IT
- LE
- ▁N
- è
- H
- ’
- Y
- X
- Z
- K
- J
- ê
- '?'
- '!'
- É
- ç
- W
- à
- ô
- â
- Q
- î
- À
- '"'
- œ
- û
- ù
- ï
- ':'
- ;
- —
- È
- «
- »
- Ç
- Ê
- ë
- á
- ü
- í
- ö
- ó
- )
- Î
- Â
- ō
- ä
- –
- Ô
- ć
- š
- '&'
- ñ
- '='
- ł
- č
- Û
- ú
- ū
- ø
- ā
- ã
- ă
- /
- ń
- _
- ș
- å
- æ
- °
- ß
- “
- ”
- ž
- ı
- Œ
- Ö
- ř
- Š
- ý
- Ō
- ‘
- ş
- ·
- o
- ę
- ÿ
- Å
- ą
- ð
- ī
- ò
- ż
- ě
- ś
- '`'
- Ë
- ì
- ē
- ğ
- İ
- '*'
- Í
- ė
- Ó
- ő
- đ
- ʻ
- Ü
- õ
- Ä
- ņ
- ṣ
- '|'
- ʾ
- π
- Ā
- σ
- '%'
- ả
- κ
- ʼ
- ň
- Ú
- ļ
- ư
- '1'
- '2'
- '}'
- ĩ
- Ҫ
- ا
- ầ
- ⁄
- ṇ
- þ
- ǎ
- ο
- ′
- s
- §
- ľ
- ǹ
- Ʉ
- ː
- ̱
- γ
- ν
- ن
- ạ
- ễ
- ộ
- ≥
- 星
- ề
- ṯ
- τ
- δ
- Δ
- Ț
- Ș
- Ū
- Ř
- ∆
- →
- ệ
- Г
- ơ
- ţ
- Þ
- Ñ
- ±
- ť
- ŏ
- €
- „
- ʿ
- Ć
- £
- α
- Ż
- Ş
- β
- ź
- Đ
- Ø
- Ś
- Ž
- Æ
- $
- Ï
- Ł
- ț
- Č
- Á
- ́
- Ù
- Μ
- ι
- ρ
- ό
- И
- з
- 京
- 北
- ď
- Ġ
- Ṭ
- −
- ☉
- '~'
- ®
- Ì
- Ò
- Õ
- ×
- ħ
- ĺ
- Ľ
- ũ
- ů
- Ų
- ǃ
- ǔ
- ̠
- ̲
- Κ
- Π
- ε
- ζ
- μ
- ς
- υ
- ψ
- І
- Ј
- А
- Е
- П
- а
- е
- м
- н
- Գ
- Զ
- ب
- د
- ر
- ل
- و
- ي
- ወ
- ደ
- ḍ
- ṅ
- ṭ
- ậ
- ắ
- ẵ
- ị
- ồ
- ờ
- ợ
- ủ
- ‐
- ―
- †
- ‹
- ›
- ₽
- ∈
- ∞
- ─
- い
- う
- た
- つ
- へ
- ま
- め
- や
- ゔ
- 扬
- 术
- 美
- 貴
- 青
- 馆
- Ꝑ
- ̐
- Ω
- ử
- ỳ
- ∨
- 乃
- 杜
- (
- Ē
- ǫ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/fr_token_list/bpe_unigram350/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_fr_bpe350_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
timhbach/Team_Gryffindor_NER | timhbach | 2022-04-29T21:13:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-11T07:08:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Team_Gryffindor_NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Team-Gryffindor-distilbert-base-finetuned-NER-creditcardcontract-100epoch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the Credit card agreement dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0470
- Precision: 0.7319
- Recall: 0.7064
- F1: 0.7190
- Accuracy: 0.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0113 | 0.33 | 500 | 0.0443 | 0.6547 | 0.7028 | 0.6779 | 0.9908 |
| 0.0118 | 0.67 | 1000 | 0.0435 | 0.7207 | 0.6440 | 0.6802 | 0.9916 |
| 0.013 | 1.0 | 1500 | 0.0449 | 0.7113 | 0.6826 | 0.6966 | 0.9918 |
| 0.0113 | 1.34 | 2000 | 0.0434 | 0.7213 | 0.6697 | 0.6946 | 0.9915 |
| 0.0121 | 1.67 | 2500 | 0.0467 | 0.6955 | 0.6789 | 0.6871 | 0.9914 |
| 0.0125 | 2.01 | 3000 | 0.0417 | 0.7095 | 0.6991 | 0.7043 | 0.9920 |
| 0.0106 | 2.34 | 3500 | 0.0437 | 0.7191 | 0.6624 | 0.6896 | 0.9918 |
| 0.0114 | 2.68 | 4000 | 0.0468 | 0.7165 | 0.6679 | 0.6914 | 0.9920 |
| 0.0125 | 3.01 | 4500 | 0.0431 | 0.6888 | 0.6862 | 0.6875 | 0.9917 |
| 0.0107 | 3.35 | 5000 | 0.0446 | 0.7184 | 0.6459 | 0.6802 | 0.9913 |
| 0.0096 | 3.68 | 5500 | 0.0485 | 0.6926 | 0.6532 | 0.6723 | 0.9912 |
| 0.013 | 4.02 | 6000 | 0.0448 | 0.6134 | 0.6697 | 0.6404 | 0.9907 |
| 0.0102 | 4.35 | 6500 | 0.0497 | 0.6895 | 0.6642 | 0.6766 | 0.9913 |
| 0.0112 | 4.69 | 7000 | 0.0464 | 0.6759 | 0.6697 | 0.6728 | 0.9910 |
| 0.0117 | 5.02 | 7500 | 0.0484 | 0.7451 | 0.6275 | 0.6813 | 0.9916 |
| 0.0114 | 5.36 | 8000 | 0.0411 | 0.7086 | 0.6826 | 0.6953 | 0.9919 |
| 0.0108 | 5.69 | 8500 | 0.0443 | 0.7041 | 0.6679 | 0.6855 | 0.9916 |
| 0.0109 | 6.03 | 9000 | 0.0470 | 0.7228 | 0.6697 | 0.6952 | 0.9916 |
| 0.0099 | 6.36 | 9500 | 0.0471 | 0.7253 | 0.6881 | 0.7062 | 0.9913 |
| 0.0103 | 6.7 | 10000 | 0.0430 | 0.6986 | 0.7101 | 0.7043 | 0.9914 |
| 0.0117 | 7.03 | 10500 | 0.0462 | 0.7327 | 0.6991 | 0.7155 | 0.9918 |
| 0.0098 | 7.37 | 11000 | 0.0483 | 0.6910 | 0.6771 | 0.6840 | 0.9914 |
| 0.0107 | 7.7 | 11500 | 0.0468 | 0.7189 | 0.6899 | 0.7041 | 0.9916 |
| 0.0119 | 8.04 | 12000 | 0.0434 | 0.6970 | 0.6881 | 0.6925 | 0.9918 |
| 0.0112 | 8.37 | 12500 | 0.0469 | 0.7007 | 0.6917 | 0.6962 | 0.9918 |
| 0.011 | 8.71 | 13000 | 0.0469 | 0.6736 | 0.6514 | 0.6623 | 0.9914 |
| 0.0101 | 9.04 | 13500 | 0.0451 | 0.6691 | 0.6606 | 0.6648 | 0.9913 |
| 0.0099 | 9.38 | 14000 | 0.0462 | 0.7006 | 0.6826 | 0.6914 | 0.9918 |
| 0.0107 | 9.71 | 14500 | 0.0444 | 0.6840 | 0.6752 | 0.6796 | 0.9915 |
| 0.0118 | 10.05 | 15000 | 0.0457 | 0.7015 | 0.6771 | 0.6891 | 0.9918 |
| 0.0102 | 10.38 | 15500 | 0.0500 | 0.7413 | 0.6679 | 0.7027 | 0.9919 |
| 0.0107 | 10.72 | 16000 | 0.0470 | 0.7319 | 0.7064 | 0.7190 | 0.9920 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
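The card does not show inference code; an illustrative sketch (the sentence is invented and the entity labels depend on the model's config) would be:
```python
# Illustrative sketch: tag entities in a credit-card-agreement style sentence.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="timhbach/Team_Gryffindor_NER",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)
print(ner("The annual percentage rate for purchases is 19.99 percent."))
```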
|
Nadhiya/distilbert-base-uncased-finetuned-squad | Nadhiya | 2022-04-29T18:20:29Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-24T20:58:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6023
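The card does not include a usage example; below is a minimal sketch with the `question-answering` pipeline (the question and context are purely illustrative). Given the high evaluation loss reported above, outputs should be treated as experimental.
```python
from transformers import pipeline

# Sketch only: loads this checkpoint into the standard extractive QA pipeline.
qa = pipeline("question-answering", model="Nadhiya/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which base model was fine-tuned?",  # illustrative question
    context="The checkpoint was fine-tuned from distilbert-base-uncased on a SQuAD-style dataset.",
)
print(result["answer"], result["score"])
```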
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 54 | 5.8535 |
| No log | 2.0 | 108 | 6.4469 |
| No log | 3.0 | 162 | 6.6023 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nikhedward/bart-large-cnn-finetuned-multi-news | nikhedward | 2022-04-29T15:22:47Z | 14 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-13T04:36:34Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-multi-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 42.0423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-multi-news
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0950
- Rouge1: 42.0423
- Rouge2: 14.8812
- Rougel: 23.3412
- Rougelsum: 36.2613
## Model description
bart-large-cnn fine-tuned on a sample of the multi-news dataset
## Intended uses & limitations
The intended use of the model is downstream summarization tasks, but it is limited to inputs of at most 1024 tokens. Any text longer than that will be truncated.
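A minimal usage sketch reflecting this limit (the input text is illustrative; `truncation=True` asks the tokenizer to cut inputs down to the model's maximum length):
```python
from transformers import pipeline

# Sketch only: standard summarization pipeline around this checkpoint.
summarizer = pipeline("summarization", model="nikhedward/bart-large-cnn-finetuned-multi-news")

long_article = "First article text ... Second article text ..."  # illustrative multi-document input
summary = summarizer(long_article, truncation=True)               # inputs beyond the limit are truncated
print(summary[0]["summary_text"])
```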
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2037 | 1.0 | 750 | 2.0950 | 42.0423 | 14.8812 | 23.3412 | 36.2613 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa | huggingtweets | 2022-04-29T15:09:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-29T07:44:08Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1514648481281056772/ACunKh0I_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484924573032148993/qdB7hbSU_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1341030286386192386/TzEiVCaJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth)</div>
<div style="text-align: center; font-size: 14px;">@cokedupoptions-greg16676935420-parikpatelcfa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth).
| Data | greg | John W. Rich (Fake Tech Exec) | Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth) |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3247 | 3250 |
| Retweets | 27 | 202 | 22 |
| Short tweets | 664 | 331 | 719 |
| Tweets kept | 2556 | 2714 | 2509 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/snhk0760/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cokedupoptions-greg16676935420-parikpatelcfa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/iresidwo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/iresidwo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Goud/DarijaBERT-summarization-goud | Goud | 2022-04-29T15:07:03Z | 17 | 2 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"dataset:Goud/Goud-sum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-04-20T22:37:47Z | ---
datasets:
- Goud/Goud-sum
language:
- "Moroccan Arabic (MA)"
- "Modern Standard Arabic (MSA)"
metrics:
- rouge
tags:
- summarization
widget:
-
text: "توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت. وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير. ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها. ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة. وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”. وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي. وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا. "
---
This model was introduced in [this paper](https://openreview.net/forum?id=BMVq5MELb9). It is an encoder-decoder model that was initialized from the [DarijaBERT](https://huggingface.co/Kamel/DarijaBERT) checkpoint and then fine-tuned for text summarization on the [Goud dataset](https://huggingface.co/datasets/Goud/Goud-sum).
## How to use
Here is how you can use this model:
```python
from transformers import EncoderDecoderModel, BertTokenizer
article = """توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت.
وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير.
ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها.
ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة.
وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”.
وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي.
وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا.
"""
tokenizer = BertTokenizer.from_pretrained("Goud/DarijaBERT-summarization-goud")
model = EncoderDecoderModel.from_pretrained("Goud/DarijaBERT-summarization-goud")
input_ids = tokenizer(article, return_tensors="pt", truncation=True, padding=True).input_ids
generated = model.generate(input_ids)[0]
output = tokenizer.decode(generated, skip_special_tokens=True)
```
## Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
``` |
Goud/DziriBERT-summarization-goud | Goud | 2022-04-29T15:06:30Z | 14 | 2 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"dataset:Goud/Goud-sum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-04-20T22:16:15Z | ---
datasets:
- Goud/Goud-sum
language:
- "Moroccan Arabic (MA)"
- "Modern Standard Arabic (MSA)"
metrics:
- rouge
tags:
- summarization
widget:
-
text: "توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت. وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير. ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها. ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة. وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”. وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي. وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا. "
---
This model was introduced in [this paper](https://openreview.net/forum?id=BMVq5MELb9). It is an encoder-decoder model that was initialized from the [DziriBERT](https://huggingface.co/alger-ia/dziribert) checkpoint and then fine-tuned for text summarization on the [Goud dataset](https://huggingface.co/datasets/Goud/Goud-sum).
## How to use
Here is how you can use this model:
```python
from transformers import EncoderDecoderModel, BertTokenizer
article = """توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت.
وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير.
ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها.
ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة.
وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”.
وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي.
وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا.
"""
tokenizer = BertTokenizer.from_pretrained("Goud/DziriBERT-summarization-goud")
model = EncoderDecoderModel.from_pretrained("Goud/DziriBERT-summarization-goud")
input_ids = tokenizer(article, return_tensors="pt", truncation=True, padding=True).input_ids
generated = model.generate(input_ids)[0]
output = tokenizer.decode(generated, skip_special_tokens=True)
```
## Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
``` |
KoboldAI/GPT-Neo-125M-AID | KoboldAI | 2022-04-29T14:48:16Z | 19 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | # GPT-Neo-125M-AID
This model was fine-tuned by Henk717 on Google Colab. It contains text-adventure tuning and is the smallest 'Adventure' model of its kind.
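A minimal usage sketch with the text-generation pipeline (the adventure-style prompt and sampling settings are illustrative; no fixed prompt format is documented here):
```python
from transformers import pipeline

# Sketch only: loads the checkpoint into the standard text-generation pipeline.
generator = pipeline("text-generation", model="KoboldAI/GPT-Neo-125M-AID")

prompt = "You enter the abandoned lighthouse and light your torch.\n> look around\n"  # illustrative prompt
story = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
print(story[0]["generated_text"])
```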
Because of its limited size, its behavior is mostly suited to quickly testing text-adventure game modes; for a coherent adventure you are better off using one of the 2.7B models. |
faisalahmad2/autotrain-nlp-text-summarization-by-faisal-793224456 | faisalahmad2 | 2022-04-29T14:05:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"en",
"dataset:faisalahmad2/autotrain-data-nlp-text-summarization-by-faisal",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-27T15:03:43Z | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- faisalahmad2/autotrain-data-nlp-text-summarization-by-faisal
co2_eq_emissions: 27.26671996544415
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 793224456
- CO2 Emissions (in grams): 27.26671996544415
## Validation Metrics
- Loss: 1.5189369916915894
- Rouge1: 38.7852
- Rouge2: 17.0785
- RougeL: 32.1082
- RougeLsum: 32.1103
- Gen Len: 18.7332
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/faisalahmad2/autotrain-nlp-text-summarization-by-faisal-793224456
``` |
huggingtweets/corpsecrusader | huggingtweets | 2022-04-29T13:57:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/corpsecrusader/1651240626010/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1515787050334801925/tyxpMmj1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Corpse Crusader 🫀🇫🇮 gamedev hours🧱🍐💨💪</div>
<div style="text-align: center; font-size: 14px;">@corpsecrusader</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Corpse Crusader 🫀🇫🇮 gamedev hours🧱🍐💨💪.
| Data | Corpse Crusader 🫀🇫🇮 gamedev hours🧱🍐💨💪 |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 405 |
| Short tweets | 658 |
| Tweets kept | 2181 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ogdqtie2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @corpsecrusader's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ecpg08j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ecpg08j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/corpsecrusader')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ansh/keras-demo | Ansh | 2022-04-29T13:48:51Z | 1 | 0 | keras | [
"keras",
"tf-keras",
"bert",
"region:us"
] | null | 2022-04-29T12:55:31Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-05, 'decay': 1e-07, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
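Reconstructed in code, the optimizer configuration above corresponds roughly to the following (a sketch; `decay` is the legacy per-step learning-rate decay argument):
```python
import tensorflow as tf

# Sketch only: rebuilds the Adam configuration listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-05,
    decay=1e-07,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```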
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
umarkhalid96/t5-small-train | umarkhalid96 | 2022-04-29T12:36:08Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-04-24T19:52:13Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-train
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2669
- Rouge1: 43.2372
- Rouge2: 21.6755
- Rougel: 38.1637
- Rougelsum: 38.5444
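The card gives no usage example; a minimal sketch is shown below, assuming the model expects the usual T5 `"summarize: "` prefix (the card does not state this explicitly):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: the "summarize: " prefix is the usual t5-small convention, assumed here.
tokenizer = AutoTokenizer.from_pretrained("umarkhalid96/t5-small-train")
model = AutoModelForSeq2SeqLM.from_pretrained("umarkhalid96/t5-small-train")

text = "summarize: " + "Your meeting notes or article text here."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```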
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.2032 | 1.0 | 45 | 2.6305 | 34.393 | 15.4821 | 30.3601 | 30.5865 |
| 2.6291 | 2.0 | 90 | 2.4169 | 38.2327 | 18.4622 | 34.2887 | 34.3385 |
| 2.4294 | 3.0 | 135 | 2.3395 | 40.4405 | 19.927 | 36.559 | 36.8095 |
| 2.3191 | 4.0 | 180 | 2.3059 | 41.4214 | 20.4534 | 36.6399 | 36.9088 |
| 2.2949 | 5.0 | 225 | 2.2857 | 42.6906 | 21.1492 | 37.5557 | 37.8722 |
| 2.2591 | 6.0 | 270 | 2.2762 | 43.1598 | 21.6179 | 38.1235 | 38.5053 |
| 2.1722 | 7.0 | 315 | 2.2680 | 43.4447 | 21.8048 | 38.4077 | 38.7384 |
| 2.1993 | 8.0 | 360 | 2.2669 | 43.2372 | 21.6755 | 38.1637 | 38.5444 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nepp1d0/SingleBertModel-ProtBertfinetuned-smilesBindingDB | nepp1d0 | 2022-04-29T12:23:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-13T22:27:57Z | ---
tags:
- generated_from_trainer
model-index:
- name: SingleBertModel-ProtBertfinetuned-smilesBindingDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SingleBertModel-ProtBertfinetuned-smilesBindingDB
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5245 | 1.0 | 10000 | nan |
| 2.5037 | 2.0 | 20000 | nan |
| 2.4967 | 3.0 | 30000 | nan |
| 2.4983 | 4.0 | 40000 | nan |
| 2.4926 | 5.0 | 50000 | nan |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
doc2query/msmarco-russian-mt5-base-v1 | doc2query | 2022-04-29T12:10:29Z | 21 | 8 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ru",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T12:10:14Z | ---
language: ru
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
license: apache-2.0
---
# doc2query/msmarco-russian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-russian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
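If reproducible sampling output is needed, the random state can be fixed before generation; a small sketch building on the snippet above (`set_seed` seeds Python, NumPy and PyTorch):
```python
from transformers import set_seed

set_seed(42)          # fix the RNGs so sampling becomes repeatable within one process
create_queries(text)  # now returns the same sampled queries on every run
```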
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
huggan/stylegan_cat256 | huggan | 2022-04-29T12:01:40Z | 0 | 1 | null | [
"pytorch",
"gan",
"stylegan",
"huggan",
"unconditional-image-generation",
"license:apache-2.0",
"region:us"
] | unconditional-image-generation | 2022-04-18T21:54:15Z | ---
tags:
- gan
- stylegan
- huggan
- unconditional-image-generation
license: apache-2.0
---
The model provided is a StyleGAN generator trained on the LSUN cats dataset at a resolution of 256px. It is uploaded as part of porting the project https://github.com/genforce/sefa to Hugging Face Spaces. |
doc2query/msmarco-indonesian-mt5-base-v1 | doc2query | 2022-04-29T11:58:59Z | 23 | 2 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"id",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T11:58:44Z | ---
language: id
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
license: apache-2.0
---
# doc2query/msmarco-indonesian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-indonesian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
huggan/pggan-celebahq-1024 | huggan | 2022-04-29T11:58:41Z | 0 | 0 | null | [
"pytorch",
"gan",
"pggan",
"huggan",
"unconditional-image-generation",
"license:apache-2.0",
"region:us"
] | unconditional-image-generation | 2022-04-17T19:15:25Z | ---
license: apache-2.0
tags:
- gan
- pggan
- huggan
- unconditional-image-generation
---
The model provided is a PGGAN generator trained on the CelebA-HQ dataset at a resolution of 1024px. It is uploaded as part of porting the project https://github.com/genforce/sefa to Hugging Face Spaces. |
doc2query/msmarco-hindi-mt5-base-v1 | doc2query | 2022-04-29T11:56:03Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"hi",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T11:55:47Z | ---
language: hi
datasets:
- unicamp-dl/mmarco
widget:
- text: "पाइथन एक सामान्य कार्यों के लिए उपयुक्त, उच्च स्तरीय प्रोग्रामिंग भाषा (General Purpose and High Level Programming language), इन्टरैक्टिव, ऑब्जेक्ट ओरिएन्टेड, स्क्रिप्टिंग भाषा है। इस भाषा को इस तरह से डिजाइन किया गया है ताकि इसमें लिखे गए कोड आसानी से पढ़े और समझे जा सकें।"
license: apache-2.0
---
# doc2query/msmarco-hindi-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-hindi-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "पाइथन एक सामान्य कार्यों के लिए उपयुक्त, उच्च स्तरीय प्रोग्रामिंग भाषा (General Purpose and High Level Programming language), इन्टरैक्टिव, ऑब्जेक्ट ओरिएन्टेड, स्क्रिप्टिंग भाषा है। इस भाषा को इस तरह से डिजाइन किया गया है ताकि इसमें लिखे गए कोड आसानी से पढ़े और समझे जा सकें।"
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
norefly/opus-mt-ko-en-finetuned-ko-to-en3 | norefly | 2022-04-29T11:48:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T04:28:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1864
- Bleu: 0.7037
- Gen Len: 11.0
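A minimal usage sketch with the translation pipeline (the Korean sentence is illustrative; given the low BLEU reported above, outputs should be treated as experimental):
```python
from transformers import pipeline

# Sketch only: loads this Marian checkpoint into the translation pipeline.
translator = pipeline("translation", model="norefly/opus-mt-ko-en-finetuned-ko-to-en3")

result = translator("안녕하세요, 만나서 반갑습니다.", max_length=64)  # illustrative Korean input
print(result[0]["translation_text"])
```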
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 119 | 4.4541 | 0.0 | 5.0 |
| No log | 1.99 | 238 | 2.4214 | 0.3414 | 16.0 |
| No log | 2.99 | 357 | 2.2158 | 0.3212 | 15.0 |
| No log | 3.99 | 476 | 2.1737 | 0.3283 | 12.0 |
| 3.2958 | 4.99 | 595 | 2.1864 | 0.7037 | 11.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
doc2query/msmarco-arabic-mt5-base-v1 | doc2query | 2022-04-29T11:42:59Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ar",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T11:42:40Z | ---
language: ar
datasets:
- unicamp-dl/mmarco
widget:
- text: "بايثون (بالإنجليزية: Python) هي لغة برمجة، عالية المستوى سهلة التعلم مفتوحة المصدر قابلة للتوسيع، تعتمد أسلوب البرمجة الكائنية (OOP). لغة بايثون هي لغة مُفسَّرة، ومُتعدِدة الاستخدامات، وتستخدم بشكل واسع في العديد من المجالات، كبناء البرامج المستقلة باستخدام الواجهات الرسومية وفي تطبيقات الويب، ويمكن استخدامها كلغة برمجة نصية للتحكم في أداء العديد من البرمجيات مثل بلندر. بشكل عام، يمكن استخدام بايثون لعمل البرامج البسيطة للمبتدئين، ولإنجاز المشاريع الضخمة في الوقت نفسه. غالباً ما يُنصح المبتدؤون في ميدان البرمجة بتعلم هذه اللغة لأنها من بين أسرع اللغات البرمجية تعلماً."
license: apache-2.0
---
# doc2query/msmarco-arabic-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models (see the sketch after the usage example below).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-arabic-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "بايثون (بالإنجليزية: Python) هي لغة برمجة، عالية المستوى سهلة التعلم مفتوحة المصدر قابلة للتوسيع، تعتمد أسلوب البرمجة الكائنية (OOP). لغة بايثون هي لغة مُفسَّرة، ومُتعدِدة الاستخدامات، وتستخدم بشكل واسع في العديد من المجالات، كبناء البرامج المستقلة باستخدام الواجهات الرسومية وفي تطبيقات الويب، ويمكن استخدامها كلغة برمجة نصية للتحكم في أداء العديد من البرمجيات مثل بلندر. بشكل عام، يمكن استخدام بايثون لعمل البرامج البسيطة للمبتدئين، ولإنجاز المشاريع الضخمة في الوقت نفسه. غالباً ما يُنصح المبتدؤون في ميدان البرمجة بتعلم هذه اللغة لأنها من بين أسرع اللغات البرمجية تعلماً."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
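As a small illustration of the training-data-generation use case above, each sampled query can be stored together with its source passage as a (query, positive passage) pair, for example as JSON lines. A sketch reusing the objects loaded in the snippet above (the file name and record format are illustrative):
```python
import json

def generate_pairs(paragraphs, queries_per_paragraph=3, out_path="generated_pairs.jsonl"):
    """Write (query, passage) pairs that can later be used to train a dense retriever."""
    with open(out_path, "w", encoding="utf-8") as f:
        for para in paragraphs:
            input_ids = tokenizer.encode(para, return_tensors="pt")
            outputs = model.generate(
                input_ids=input_ids,
                max_length=64,
                do_sample=True,
                top_p=0.95,
                top_k=10,
                num_return_sequences=queries_per_paragraph,
            )
            for o in outputs:
                query = tokenizer.decode(o, skip_special_tokens=True)
                f.write(json.dumps({"query": query, "positive": para}, ensure_ascii=False) + "\n")

generate_pairs([text])
```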
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
ZZ99/deberta-v3-large-tapt | ZZ99 | 2022-04-29T09:24:11Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-26T09:27:01Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-mlm
This model is a fine-tuned version of [/root/autodl-tmp/nbme/tmp/test-mlm/deberta-v3-large-tapt](https://huggingface.co//root/autodl-tmp/nbme/tmp/test-mlm/deberta-v3-large-tapt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3251
- Accuracy: 0.7285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
doc2query/msmarco-german-mt5-base-v1 | doc2query | 2022-04-29T09:03:18Z | 20 | 6 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"de",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-29T08:49:21Z | ---
language: de
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
license: apache-2.0
---
# doc2query/msmarco-german-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini. A small sketch of this expansion step is given after the usage example below.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-german-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
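As a small illustration of the document-expansion use case above, the generated queries can simply be appended to the passage text before indexing it with any BM25 engine. A sketch reusing the objects loaded in the snippet above (the `expand` helper and the plain concatenation are illustrative, not part of this model's tooling):
```python
def expand(paragraph: str, num_queries: int = 20) -> str:
    """Append sampled queries to a paragraph; the result is what gets indexed with BM25."""
    input_ids = tokenizer.encode(paragraph, return_tensors="pt")
    outputs = model.generate(
        input_ids=input_ids,
        max_length=64,
        do_sample=True,
        top_p=0.95,
        top_k=10,
        num_return_sequences=num_queries,
    )
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    return paragraph + " " + " ".join(queries)

expanded_document = expand(text)  # store this string in the BM25 index instead of the raw paragraph
```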
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
Das282000Prit/fyp-finetuned-brown | Das282000Prit | 2022-04-29T06:41:10Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-29T06:15:15Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Das282000Prit/fyp-finetuned-brown
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Das282000Prit/fyp-finetuned-brown
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.5777
- Validation Loss: 3.0737
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -844, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5777 | 3.0737 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
NLPC-UOM/SinBERT-large | NLPC-UOM | 2022-04-29T05:05:04Z | 196 | 5 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"si",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- si
license:
- mit
---
This is the SinBERT-large model. SinBERT models are pretrained on a large Sinhala monolingual corpus (sin-cc-15M) using RoBERTa. If you use this model, please cite *BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, LREC 2022*. |
NLPC-UOM/SinBERT-small | NLPC-UOM | 2022-04-29T05:04:13Z | 72 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"si",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
license: mit
language:
- si
---
This is the SinBERT-small model. SinBERT models are pretrained on a large Sinhala monolingual corpus (sin-cc-15M) using RoBERTa. If you use this model, please cite *BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, LREC 2022*.
|
bkh6722/xlsr-vorarlbergerisch | bkh6722 | 2022-04-29T04:45:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-29T02:50:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
name: wav2vec2-xlsr-vorarlbergerisch
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-vorarlbergerisch
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3193
- Wer: 0.3235
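A minimal usage sketch with the automatic-speech-recognition pipeline (the audio path is a placeholder; 16 kHz mono audio is the usual expectation for XLSR models):
```python
from transformers import pipeline

# Sketch only: loads this checkpoint into the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="bkh6722/xlsr-vorarlbergerisch")

transcription = asr("path/to/recording.wav")  # placeholder path; resample to 16 kHz mono beforehand
print(transcription["text"])
```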
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 62
- mixed_precision_training: Native AMP
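Not from the original card: a sketch of how the hyperparameters above map onto `transformers.TrainingArguments` for a standard `Trainer` run; the `output_dir` name is illustrative.
```python
# Sketch only (an assumption, not from the original card): the listed
# hyperparameters expressed as transformers.TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-vorarlbergerisch",  # illustrative path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=62,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```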
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 15.6717 | 3.83 | 100 | 3.0247 | 1.0 |
| 2.485 | 7.68 | 200 | 1.5937 | 0.9046 |
| 0.784 | 11.53 | 300 | 1.2664 | 0.5 |
| 0.3689 | 15.38 | 400 | 1.2046 | 0.4696 |
| 0.2618 | 19.23 | 500 | 1.1289 | 0.4155 |
| 0.2088 | 23.08 | 600 | 0.9339 | 0.3623 |
| 0.1388 | 26.91 | 700 | 1.1448 | 0.3573 |
| 0.1042 | 30.75 | 800 | 1.1411 | 0.3606 |
| 0.0784 | 34.6 | 900 | 1.2046 | 0.3547 |
| 0.0607 | 38.45 | 1000 | 1.2243 | 0.3488 |
| 0.0459 | 42.3 | 1100 | 1.2387 | 0.3226 |
| 0.0273 | 46.15 | 1200 | 1.2123 | 0.3387 |
| 0.0195 | 49.98 | 1300 | 1.2232 | 0.3345 |
| 0.0188 | 53.83 | 1400 | 1.2656 | 0.3235 |
| 0.0132 | 57.68 | 1500 | 1.3377 | 0.3285 |
| 0.0089 | 61.53 | 1600 | 1.3193 | 0.3235 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
kimhieu/distilbert-base-uncased-finetuned-cola | kimhieu | 2022-04-29T03:44:47Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-29T02:39:26Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: kimhieu/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kimhieu/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1828
- Validation Loss: 0.5520
- Train Matthews Correlation: 0.5286
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5184 | 0.4675 | 0.4484 | 0 |
| 0.3164 | 0.4646 | 0.4963 | 1 |
| 0.1828 | 0.5520 | 0.5286 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
chv5/t5-small-shuffled_take3-small | chv5 | 2022-04-29T03:26:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-28T18:06:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-shuffled_take3-small
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 11.883
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-shuffled_take3-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4505
- Rouge1: 11.883
- Rouge2: 9.4784
- Rougel: 10.9978
- Rougelsum: 11.5961
- Gen Len: 18.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 0.5205 | 1.0 | 34008 | 0.4505 | 11.883 | 9.4784 | 10.9978 | 11.5961 | 18.9834 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bkh6722/wav2vec2-vorarlbergerisch | bkh6722 | 2022-04-29T02:50:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-28T22:14:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-vorarlbergerisch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-vorarlbergerisch
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9241
- Wer: 0.4358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 62
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.6837 | 3.83 | 100 | 3.7188 | 1.0 |
| 3.33 | 7.68 | 200 | 3.0620 | 1.0 |
| 2.9508 | 11.53 | 300 | 2.5915 | 1.0101 |
| 1.8954 | 15.38 | 400 | 1.6930 | 0.8243 |
| 1.231 | 19.23 | 500 | 1.7179 | 0.7551 |
| 0.9862 | 23.08 | 600 | 1.5237 | 0.6529 |
| 0.7353 | 26.91 | 700 | 1.5119 | 0.5921 |
| 0.5368 | 30.75 | 800 | 1.5011 | 0.5574 |
| 0.4448 | 34.6 | 900 | 1.5334 | 0.5363 |
| 0.3278 | 38.45 | 1000 | 1.7125 | 0.5144 |
| 0.2575 | 42.3 | 1100 | 1.6529 | 0.4958 |
| 0.1966 | 46.15 | 1200 | 1.7670 | 0.4848 |
| 0.1552 | 49.98 | 1300 | 1.7586 | 0.4620 |
| 0.1118 | 53.83 | 1400 | 1.7912 | 0.4417 |
| 0.0847 | 57.68 | 1500 | 1.8709 | 0.4443 |
| 0.0654 | 61.53 | 1600 | 1.9241 | 0.4358 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Rerare/distilbert-base-uncased-finetuned-cola | Rerare | 2022-04-29T02:19:11Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-28T12:36:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5291140309961344
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7643
- Matthews Correlation: 0.5291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5288 | 1.0 | 535 | 0.5111 | 0.4154 |
| 0.3546 | 2.0 | 1070 | 0.5285 | 0.4887 |
| 0.235 | 3.0 | 1605 | 0.5950 | 0.5153 |
| 0.1722 | 4.0 | 2140 | 0.7643 | 0.5291 |
| 0.1346 | 5.0 | 2675 | 0.8441 | 0.5185 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
obokkkk/mt5-base | obokkkk | 2022-04-29T02:04:16Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-28T05:42:00Z | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2760
- Bleu: 8.6707
- Gen Len: 16.9319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 183 | 1.4997 | 6.2141 | 17.0073 |
| No log | 2.0 | 366 | 1.3718 | 7.4647 | 16.9205 |
| 1.9408 | 3.0 | 549 | 1.3184 | 8.1938 | 16.8962 |
| 1.9408 | 4.0 | 732 | 1.2857 | 8.5265 | 16.9167 |
| 1.9408 | 5.0 | 915 | 1.2760 | 8.6707 | 16.9319 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
obokkkk/wav2vec2-base-960h-finetuned_common_voice3 | obokkkk | 2022-04-29T00:37:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-28T05:57:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-960h-finetuned_common_voice3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned_common_voice3
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lilykaw/distilbert-base-uncased-finetuned-stsb | lilykaw | 2022-04-28T22:46:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-28T21:14:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8651841336703003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5634
- Pearson: 0.8680
- Spearmanr: 0.8652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6646 | 0.8516 | 0.8494 |
| 1.0238 | 2.0 | 720 | 0.5617 | 0.8666 | 0.8637 |
| 0.3952 | 3.0 | 1080 | 0.6533 | 0.8649 | 0.8646 |
| 0.3952 | 4.0 | 1440 | 0.5889 | 0.8651 | 0.8625 |
| 0.2488 | 5.0 | 1800 | 0.5634 | 0.8680 | 0.8652 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AbhiNaiky/finetuning-sentiment-model-3000-samples | AbhiNaiky | 2022-04-28T22:34:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-28T22:16:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3170
- Accuracy: 0.8733
- F1: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Sathira/autotrain-mbtiNlp-798824628 | Sathira | 2022-04-28T22:09:14Z | 34 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:Sathira/autotrain-data-mbtiNlp",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-28T21:01:33Z | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Sathira/autotrain-data-mbtiNlp
co2_eq_emissions: 121.67185089502216
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 798824628
- CO2 Emissions (in grams): 121.67185089502216
## Validation Metrics
- Loss: 0.5046824812889099
- Accuracy: 0.8472124039775673
- Macro F1: 0.7812978033330673
- Micro F1: 0.8472124039775673
- Weighted F1: 0.8464983956259307
- Macro Precision: 0.812208631055716
- Micro Precision: 0.8472124039775673
- Weighted Precision: 0.8478968364150775
- Macro Recall: 0.7593223884993787
- Micro Recall: 0.8472124039775673
- Weighted Recall: 0.8472124039775673
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Sathira/autotrain-mbtiNlp-798824628
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Sathira/autotrain-mbtiNlp-798824628", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Sathira/autotrain-mbtiNlp-798824628", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
dannytkn/bert-finetuned-squad | dannytkn | 2022-04-28T20:12:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-27T09:17:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.2
- Datasets 1.18.3
- Tokenizers 0.10.3
|
dccuchile/albert-xxlarge-spanish | dccuchile | 2022-04-28T19:56:15Z | 25 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT XXLarge Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 3125
- Goal steps: 4000000
- Total steps: 1650000
- Total training time (approx.): 70.7 days.
## Training loss
 |
dccuchile/albert-large-spanish | dccuchile | 2022-04-28T19:55:20Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT Large Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.000625
- Batch Size: 512
- Warmup ratio: 0.003125
- Warmup steps: 12500
- Goal steps: 4000000
- Total steps: 1450000
- Total training time (approx.): 42 days.
## Training loss

|
dccuchile/albert-base-spanish | dccuchile | 2022-04-28T19:55:01Z | 246 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT Base Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0008838834765
- Batch Size: 960
- Warmup ratio: 0.00625
- Warmup steps: 53333.33333
- Goal steps: 8533333.333
- Total steps: 3650000
- Total training time (approx.): 70.4 days.
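Not part of the original card: a minimal loading sketch, assuming the released weights work with the standard `transformers` Auto classes.
```python
# Sketch only (assumed usage, not from the original card): load the checkpoint
# and compute contextual embeddings for a Spanish sentence.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dccuchile/albert-base-spanish")
model = AutoModel.from_pretrained("dccuchile/albert-base-spanish")

inputs = tokenizer("Este es un ejemplo.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```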
## Training loss
 |
dccuchile/albert-tiny-spanish | dccuchile | 2022-04-28T19:54:10Z | 11 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT Tiny Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.00125
- Batch Size: 2048
- Warmup ratio: 0.0125
- Warmup steps: 125000
- Goal steps: 10000000
- Total steps: 8300000
- Total training time (approx.): 58.2 days.
## Training loss
 |
princeton-nlp/efficient_mlm_m0.50 | princeton-nlp | 2022-04-28T18:58:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-04-28T15:28:20Z | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
princeton-nlp/efficient_mlm_m0.60 | princeton-nlp | 2022-04-28T18:58:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-04-28T15:28:27Z | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
princeton-nlp/efficient_mlm_m0.70 | princeton-nlp | 2022-04-28T18:57:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-04-28T15:28:36Z | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
radames/PIFu-upright-standing | radames | 2022-04-28T18:31:23Z | 0 | 3 | null | [
"license:mit",
"region:us"
] | null | 2022-04-22T23:41:52Z | ---
license: mit
---
# PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
<a href="https://github.com/shunsukesaito/PIFu" target="_blank">https://github.com/shunsukesaito/PIFu</a>
This is a checkpoint from the original project; here are some important <a href="https://github.com/shunsukesaito/PIFu#demo" target="_blank">notes</a>:
> Warning: The released model is trained with mostly upright standing scans with weak perspective projection and the pitch angle of 0 degree. Reconstruction quality may degrade for images highly deviated from training data.
```
@InProceedings{saito2019pifu,
author = {Saito, Shunsuke and Huang, Zeng and Natsume, Ryota and Morishima, Shigeo and Kanazawa, Angjoo and Li, Hao},
title = {PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}
```
|
faisalahmad/summarizer2 | faisalahmad | 2022-04-28T17:48:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain",
"en",
"dataset:faisalahmad/autotrain-data-nsut-nlp-project-textsummarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-27T09:09:24Z | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- faisalahmad/autotrain-data-nsut-nlp-project-textsummarization
co2_eq_emissions: 4444.804304528572
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 791824381
- CO2 Emissions (in grams): 4444.804304528572
## Validation Metrics
- Loss: 1.4599040746688843
- Rouge1: 46.5461
- Rouge2: 23.8595
- RougeL: 38.526
- RougeLsum: 38.5219
- Gen Len: 23.468
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/faisalahmad/autotrain-nsut-nlp-project-textsummarization-791824381
``` |
Slavka/distil-bert-finetuned-log-parser-winlogbeat | Slavka | 2022-04-28T17:46:08Z | 6 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-26T21:43:08Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distil-bert-finetuned-log-parser-winlogbeat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distil-bert-finetuned-log-parser-winlogbeat
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
juancavallotti/roberta-base-culinary-finetuned | juancavallotti | 2022-04-28T17:42:59Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-28T17:06:44Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-culinary-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-culinary-finetuned
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
- F1: 0.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1803 | 0.11 | 500 | 0.1939 | 0.9611 |
| 0.1543 | 0.22 | 1000 | 0.1364 | 0.9669 |
| 0.1213 | 0.32 | 1500 | 0.1487 | 0.9728 |
| 0.1079 | 0.43 | 2000 | 0.0855 | 0.9773 |
| 0.0975 | 0.54 | 2500 | 0.0844 | 0.9831 |
| 0.0855 | 0.65 | 3000 | 0.0785 | 0.9831 |
| 0.0844 | 0.76 | 3500 | 0.0679 | 0.9857 |
| 0.0793 | 0.86 | 4000 | 0.0489 | 0.9890 |
| 0.0864 | 0.97 | 4500 | 0.0399 | 0.9903 |
| 0.049 | 1.08 | 5000 | 0.0528 | 0.9890 |
| 0.0353 | 1.19 | 5500 | 0.0635 | 0.9877 |
| 0.0321 | 1.3 | 6000 | 0.0542 | 0.9903 |
| 0.0311 | 1.41 | 6500 | 0.0559 | 0.9896 |
| 0.0315 | 1.51 | 7000 | 0.0736 | 0.9857 |
| 0.04 | 1.62 | 7500 | 0.0648 | 0.9909 |
| 0.0265 | 1.73 | 8000 | 0.0608 | 0.9909 |
| 0.0443 | 1.84 | 8500 | 0.0617 | 0.9883 |
| 0.0443 | 1.95 | 9000 | 0.0555 | 0.9896 |
| 0.0235 | 2.05 | 9500 | 0.0608 | 0.9903 |
| 0.0139 | 2.16 | 10000 | 0.0613 | 0.9922 |
| 0.0126 | 2.27 | 10500 | 0.0739 | 0.9903 |
| 0.0164 | 2.38 | 11000 | 0.0679 | 0.9903 |
| 0.0172 | 2.49 | 11500 | 0.0606 | 0.9922 |
| 0.0175 | 2.59 | 12000 | 0.0442 | 0.9942 |
| 0.01 | 2.7 | 12500 | 0.0661 | 0.9916 |
| 0.0059 | 2.81 | 13000 | 0.0659 | 0.9929 |
| 0.0216 | 2.92 | 13500 | 0.0504 | 0.9929 |
| 0.0123 | 3.03 | 14000 | 0.0584 | 0.9929 |
| 0.0047 | 3.14 | 14500 | 0.0573 | 0.9929 |
| 0.0123 | 3.24 | 15000 | 0.0511 | 0.9935 |
| 0.0027 | 3.35 | 15500 | 0.0579 | 0.9942 |
| 0.0025 | 3.46 | 16000 | 0.0602 | 0.9935 |
| 0.0051 | 3.57 | 16500 | 0.0598 | 0.9935 |
| 0.0044 | 3.68 | 17000 | 0.0617 | 0.9929 |
| 0.0061 | 3.78 | 17500 | 0.0634 | 0.9935 |
| 0.0048 | 3.89 | 18000 | 0.0672 | 0.9929 |
| 0.0078 | 4.0 | 18500 | 0.0657 | 0.9929 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
stevems1/bert-base-uncased-ShreeGanesh | stevems1 | 2022-04-28T15:16:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-28T14:19:50Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-ShreeGanesh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ShreeGanesh
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aakarshan/autotrain-Question-translation-797524592 | aakarshan | 2022-04-28T14:48:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"translation",
"en",
"hi",
"dataset:aakarshan/autotrain-data-Question-translation",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-04-28T14:26:14Z | ---
tags:
- autotrain
- translation
language:
- en
- hi
datasets:
- aakarshan/autotrain-data-Question-translation
co2_eq_emissions: 27.564419884224776
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 797524592
- CO2 Emissions (in grams): 27.564419884224776
## Validation Metrics
- Loss: 2.2697999477386475
- SacreBLEU: 14.9797
- Gen len: 13.7071 |
nlpaueb/sec-bert-shape | nlpaueb | 2022-04-28T14:46:51Z | 45 | 14 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"finance",
"financial",
"fill-mask",
"en",
"arxiv:2203.06482",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/0yz81K9/sec-bert-logo.png
tags:
- finance
- financial
widget:
- text: "Total net sales decreased [MASK]% or $[X.X] billion during [XXXX] compared to [XXXX]"
- text: "Total net sales decreased [X]% or $[MASK] billion during [XXXX] compared to [XXXX]."
- text: "Total net sales decreased [X]% or $[X.X] billion during [MASK] compared to [XXXX]."
- text: "During [MASK], the Company repurchased $[XX.X] billion of its common stock and paid dividend equivalents of $[XX.X] billion."
- text: "During 2019, the Company repurchased $[MASK] billion of its common stock and paid dividend equivalents of $[XX.X] billion."
---
# SEC-BERT
<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="sec-bert-logo" width="400"/>
<div style="text-align: justify">
SEC-BERT is a family of BERT models for the financial domain, intended to assist financial NLP research and FinTech applications.
SEC-BERT consists of the following models:
* [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents.
* [**SEC-BERT-NUM**](https://huggingface.co/nlpaueb/sec-bert-num): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token (handling all numeric expressions in a uniform manner, disallowing their fragmentation).
* **SEC-BERT-SHAPE** (this model): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.
</div>
## Pre-training corpus
The model was pre-trained on 260,773 10-K filings from 1993-2019, publicly available at <a href="https://www.sec.gov/">U.S. Securities and Exchange Commission (SEC)</a>
## Pre-training details
<div style="text-align: justify">
* We created a new vocabulary of 30k subwords by training a [BertWordPieceTokenizer](https://github.com/huggingface/tokenizers) from scratch on the pre-training corpus.
* We trained BERT using the official code provided in [Google BERT's GitHub repository](https://github.com/google-research/bert).
* We then used [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) conversion script to convert the TF checkpoint in the desired format in order to be able to load the model in two lines of code for both PyTorch and TF2 users.
* We release a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TRC)](https://sites.research.google/trc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
</div>
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-shape")
model = AutoModel.from_pretrained("nlpaueb/sec-bert-shape")
```
## Pre-process Text
<div style="text-align: justify">
To use SEC-BERT-SHAPE, you have to pre-process texts replacing every numerical token with the corresponding shape pseudo-token, from a list of 214 predefined shape pseudo-tokens. If the numerical token does not correspond to any shape pseudo-token we replace it with the [NUM] pseudo-token.
Below is an example of how you can pre-process a simple sentence. This approach is quite simple; feel free to modify it as you see fit.
</div>
```python
import re
import spacy
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-shape")
spacy_tokenizer = spacy.load("en_core_web_sm")
sentence = "Total net sales decreased 2% or $5.4 billion during 2019 compared to 2018."
def sec_bert_shape_preprocess(text):
    tokens = [t.text for t in spacy_tokenizer(text)]
processed_text = []
for token in tokens:
if re.fullmatch(r"(\d+[\d,.]*)|([,.]\d+)", token):
shape = '[' + re.sub(r'\d', 'X', token) + ']'
if shape in tokenizer.additional_special_tokens:
processed_text.append(shape)
else:
processed_text.append('[NUM]')
else:
processed_text.append(token)
return ' '.join(processed_text)
tokenized_sentence = tokenizer.tokenize(sec_bert_shape_preprocess(sentence))
print(tokenized_sentence)
"""
['total', 'net', 'sales', 'decreased', '[X]', '%', 'or', '$', '[X.X]', 'billion', 'during', '[XXXX]', 'compared', 'to', '[XXXX]', '.']
"""
```
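A sketch (not from the original card) of reproducing masked-token predictions like those in the tables below, assuming the fill-mask pipeline accepts this checkpoint and the `sec_bert_shape_preprocess` helper above has been defined:
```python
# Sketch only (an assumption, not from the original card): mask a token after
# pre-processing, so the spaCy step does not split the literal "[MASK]" string.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/sec-bert-shape")

processed = sec_bert_shape_preprocess(
    "Total net sales decreased 2% or $5.4 billion during 2019 compared to 2018."
)
masked = processed.replace("decreased", fill_mask.tokenizer.mask_token, 1)
print(fill_mask(masked))  # per the tables below, "increased"/"decreased" should rank highest
```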
## Using SEC-BERT variants as Language Models
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales [MASK] 2% or $5.4 billion during 2019 compared to 2018. | decreased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | increased (0.221), were (0.131), are (0.103), rose (0.075), of (0.058)
| **SEC-BERT-BASE** | increased (0.678), decreased (0.282), declined (0.017), grew (0.016), rose (0.004)
| **SEC-BERT-NUM** | increased (0.753), decreased (0.211), grew (0.019), declined (0.010), rose (0.006)
| **SEC-BERT-SHAPE** | increased (0.747), decreased (0.214), grew (0.021), declined (0.013), rose (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 [MASK] during 2019 compared to 2018. | billion
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | billion (0.841), million (0.097), trillion (0.028), ##m (0.015), ##bn (0.006)
| **SEC-BERT-BASE** | million (0.972), billion (0.028), millions (0.000), ##million (0.000), m (0.000)
| **SEC-BERT-NUM** | million (0.974), billion (0.012), , (0.010), thousand (0.003), m (0.000)
| **SEC-BERT-SHAPE** | million (0.978), billion (0.021), % (0.000), , (0.000), millions (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased [MASK]% or $5.4 billion during 2019 compared to 2018. | 2
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 20 (0.031), 10 (0.030), 6 (0.029), 4 (0.027), 30 (0.027)
| **SEC-BERT-BASE** | 13 (0.045), 12 (0.040), 11 (0.040), 14 (0.035), 10 (0.035)
| **SEC-BERT-NUM** | [NUM] (1.000), one (0.000), five (0.000), three (0.000), seven (0.000)
| **SEC-BERT-SHAPE** | [XX] (0.316), [XX.X] (0.253), [X.X] (0.237), [X] (0.188), [X.XX] (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2[MASK] or $5.4 billion during 2019 compared to 2018. | %
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | % (0.795), percent (0.174), ##fold (0.009), billion (0.004), times (0.004)
| **SEC-BERT-BASE** | % (0.924), percent (0.076), points (0.000), , (0.000), times (0.000)
| **SEC-BERT-NUM** | % (0.882), percent (0.118), million (0.000), units (0.000), bps (0.000)
| **SEC-BERT-SHAPE** | % (0.961), percent (0.039), bps (0.000), , (0.000), bcf (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $[MASK] billion during 2019 compared to 2018. | 5.4
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 1 (0.074), 4 (0.045), 3 (0.044), 2 (0.037), 5 (0.034)
| **SEC-BERT-BASE** | 1 (0.218), 2 (0.136), 3 (0.078), 4 (0.066), 5 (0.048)
| **SEC-BERT-NUM** | [NUM] (1.000), l (0.000), 1 (0.000), - (0.000), 30 (0.000)
| **SEC-BERT-SHAPE** | [X.X] (0.787), [X.XX] (0.095), [XX.X] (0.049), [X.XXX] (0.046), [X] (0.013)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during [MASK] compared to 2018. | 2019
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.485), 2018 (0.169), 2016 (0.164), 2015 (0.070), 2014 (0.022)
| **SEC-BERT-BASE** | 2019 (0.990), 2017 (0.007), 2018 (0.003), 2020 (0.000), 2015 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), as (0.000), fiscal (0.000), year (0.000), when (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), as (0.000), year (0.000), periods (0.000), , (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during 2019 compared to [MASK]. | 2018
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.100), 2016 (0.097), above (0.054), inflation (0.050), previously (0.037)
| **SEC-BERT-BASE** | 2018 (0.999), 2019 (0.000), 2017 (0.000), 2016 (0.000), 2014 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), year (0.000), last (0.000), sales (0.000), fiscal (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), year (0.000), sales (0.000), prior (0.000), years (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company [MASK] $67.1 billion of its common stock and paid dividend equivalents of $14.1 billion. | repurchased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | held (0.229), sold (0.192), acquired (0.172), owned (0.052), traded (0.033)
| **SEC-BERT-BASE** | repurchased (0.913), issued (0.036), purchased (0.029), redeemed (0.010), sold (0.003)
| **SEC-BERT-NUM** | repurchased (0.917), purchased (0.054), reacquired (0.013), issued (0.005), acquired (0.003)
| **SEC-BERT-SHAPE** | repurchased (0.902), purchased (0.068), issued (0.010), reacquired (0.008), redeemed (0.006)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common [MASK] and paid dividend equivalents of $14.1 billion. | stock
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | stock (0.835), assets (0.039), equity (0.025), debt (0.021), bonds (0.017)
| **SEC-BERT-BASE** | stock (0.857), shares (0.135), equity (0.004), units (0.002), securities (0.000)
| **SEC-BERT-NUM** | stock (0.842), shares (0.157), equity (0.000), securities (0.000), units (0.000)
| **SEC-BERT-SHAPE** | stock (0.888), shares (0.109), equity (0.001), securities (0.001), stocks (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid [MASK] equivalents of $14.1 billion. | dividend
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | cash (0.276), net (0.128), annual (0.083), the (0.040), debt (0.027)
| **SEC-BERT-BASE** | dividend (0.890), cash (0.018), dividends (0.016), share (0.013), tax (0.010)
| **SEC-BERT-NUM** | dividend (0.735), cash (0.115), share (0.087), tax (0.025), stock (0.013)
| **SEC-BERT-SHAPE** | dividend (0.655), cash (0.248), dividends (0.042), share (0.019), out (0.003)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid dividend [MASK] of $14.1 billion. | equivalents
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | revenue (0.085), earnings (0.078), rates (0.065), amounts (0.064), proceeds (0.062)
| **SEC-BERT-BASE** | payments (0.790), distributions (0.087), equivalents (0.068), cash (0.013), amounts (0.004)
| **SEC-BERT-NUM** | payments (0.845), equivalents (0.097), distributions (0.024), increases (0.005), dividends (0.004)
| **SEC-BERT-SHAPE** | payments (0.784), equivalents (0.093), distributions (0.043), dividends (0.015), requirements (0.009)
## Publication
<div style="text-align: justify">
If you use this model cite the following article:<br>
[**FiNER: Financial Numeric Entity Recognition for XBRL Tagging**](https://arxiv.org/abs/2203.06482)<br>
Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos and George Paliouras<br>
In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022) (Long Papers), Dublin, Republic of Ireland, May 22 - 27, 2022
</div>
```
@inproceedings{loukas-etal-2022-finer,
title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging},
author = {Loukas, Lefteris and
Fergadiotis, Manos and
Chalkidis, Ilias and
Spyropoulou, Eirini and
Malakasiotis, Prodromos and
Androutsopoulos, Ion and
Paliouras George},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
publisher = {Association for Computational Linguistics},
location = {Dublin, Republic of Ireland},
year = {2022},
url = {https://arxiv.org/abs/2203.06482}
}
```
## About Us
<div style="text-align: justify">
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
</div>
[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) |
nlpaueb/sec-bert-num | nlpaueb | 2022-04-28T14:46:16Z | 27 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"finance",
"financial",
"fill-mask",
"en",
"arxiv:2203.06482",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/0yz81K9/sec-bert-logo.png
tags:
- finance
- financial
widget:
- text: "Total net sales decreased [MASK]% or $[NUM] billion during [NUM] compared to [NUM]."
- text: "Total net sales decreased [NUM]% or $[MASK] billion during [NUM] compared to [NUM]."
- text: "Total net sales decreased [NUM]% or $[NUM] billion during [MASK] compared to [NUM]."
- text: "During [MASK], the Company repurchased $[NUM] billion of its common stock and paid dividend equivalents of $[NUM] billion."
- text: "During 2019, the Company repurchased $[MASK] billion of its common stock and paid dividend equivalents of $[NUM] billion."
---
# SEC-BERT
<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="sec-bert-logo" width="400"/>
<div style="text-align: justify">
SEC-BERT is a family of BERT models for the financial domain, intended to assist financial NLP research and FinTech applications.
SEC-BERT consists of the following models:
* [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents.
* **SEC-BERT-NUM** (this model): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token (handling all numeric expressions in a uniform manner, disallowing their fragmentation).
* [**SEC-BERT-SHAPE**](https://huggingface.co/nlpaueb/sec-bert-shape): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.
</div>
## Pre-training corpus
The model was pre-trained on 260,773 10-K filings from 1993-2019, publicly available at <a href="https://www.sec.gov/">U.S. Securities and Exchange Commission (SEC)</a>
## Pre-training details
<div style="text-align: justify">
* We created a new vocabulary of 30k subwords by training a [BertWordPieceTokenizer](https://github.com/huggingface/tokenizers) from scratch on the pre-training corpus.
* We trained BERT using the official code provided in [Google BERT's GitHub repository](https://github.com/google-research/bert).
* We then used [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) conversion script to convert the TF checkpoint in the desired format in order to be able to load the model in two lines of code for both PyTorch and TF2 users.
* We release a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TRC)](https://sites.research.google/trc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
</div>
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-num")
model = AutoModel.from_pretrained("nlpaueb/sec-bert-num")
```
## Pre-process Text
<div style="text-align: justify">
To use SEC-BERT-NUM, you have to pre-process texts replacing every numerical token with [NUM] pseudo-token.
Below is an example of how you can pre-process a simple sentence. This approach is quite simple; feel free to modify it as you see fit.
</div>
```python
import re
import spacy
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-num")
spacy_tokenizer = spacy.load("en_core_web_sm")
sentence = "Total net sales decreased 2% or $5.4 billion during 2019 compared to 2018."
def sec_bert_num_preprocess(text):
tokens = [t.text for t in spacy_tokenizer(text)]
processed_text = []
for token in tokens:
if re.fullmatch(r"(\d+[\d,.]*)|([,.]\d+)", token):
processed_text.append('[NUM]')
else:
processed_text.append(token)
return ' '.join(processed_text)
tokenized_sentence = tokenizer.tokenize(sec_bert_num_preprocess(sentence))
print(tokenized_sentence)
"""
['total', 'net', 'sales', 'decreased', '[NUM]', '%', 'or', '$', '[NUM]', 'billion', 'during', '[NUM]', 'compared', 'to', '[NUM]', '.']
"""
```
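A sketch (not from the original card) of reproducing masked-token predictions like those in the tables below, assuming the fill-mask pipeline accepts this checkpoint and the `sec_bert_num_preprocess` helper above has been defined:
```python
# Sketch only (an assumption, not from the original card): mask a token after
# pre-processing, so the spaCy step does not split the literal "[MASK]" string.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/sec-bert-num")

processed = sec_bert_num_preprocess(
    "Total net sales decreased 2% or $5.4 billion during 2019 compared to 2018."
)
masked = processed.replace("decreased", fill_mask.tokenizer.mask_token, 1)
print(fill_mask(masked))  # per the tables below, "increased"/"decreased" should rank highest
```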
## Using SEC-BERT variants as Language Models
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales [MASK] 2% or $5.4 billion during 2019 compared to 2018. | decreased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | increased (0.221), were (0.131), are (0.103), rose (0.075), of (0.058)
| **SEC-BERT-BASE** | increased (0.678), decreased (0.282), declined (0.017), grew (0.016), rose (0.004)
| **SEC-BERT-NUM** | increased (0.753), decreased (0.211), grew (0.019), declined (0.010), rose (0.006)
| **SEC-BERT-SHAPE** | increased (0.747), decreased (0.214), grew (0.021), declined (0.013), rose (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 [MASK] during 2019 compared to 2018. | billion
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | billion (0.841), million (0.097), trillion (0.028), ##m (0.015), ##bn (0.006)
| **SEC-BERT-BASE** | million (0.972), billion (0.028), millions (0.000), ##million (0.000), m (0.000)
| **SEC-BERT-NUM** | million (0.974), billion (0.012), , (0.010), thousand (0.003), m (0.000)
| **SEC-BERT-SHAPE** | million (0.978), billion (0.021), % (0.000), , (0.000), millions (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased [MASK]% or $5.4 billion during 2019 compared to 2018. | 2
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 20 (0.031), 10 (0.030), 6 (0.029), 4 (0.027), 30 (0.027)
| **SEC-BERT-BASE** | 13 (0.045), 12 (0.040), 11 (0.040), 14 (0.035), 10 (0.035)
| **SEC-BERT-NUM** | [NUM] (1.000), one (0.000), five (0.000), three (0.000), seven (0.000)
| **SEC-BERT-SHAPE** | [XX] (0.316), [XX.X] (0.253), [X.X] (0.237), [X] (0.188), [X.XX] (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2[MASK] or $5.4 billion during 2019 compared to 2018. | %
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | % (0.795), percent (0.174), ##fold (0.009), billion (0.004), times (0.004)
| **SEC-BERT-BASE** | % (0.924), percent (0.076), points (0.000), , (0.000), times (0.000)
| **SEC-BERT-NUM** | % (0.882), percent (0.118), million (0.000), units (0.000), bps (0.000)
| **SEC-BERT-SHAPE** | % (0.961), percent (0.039), bps (0.000), , (0.000), bcf (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $[MASK] billion during 2019 compared to 2018. | 5.4
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 1 (0.074), 4 (0.045), 3 (0.044), 2 (0.037), 5 (0.034)
| **SEC-BERT-BASE** | 1 (0.218), 2 (0.136), 3 (0.078), 4 (0.066), 5 (0.048)
| **SEC-BERT-NUM** | [NUM] (1.000), l (0.000), 1 (0.000), - (0.000), 30 (0.000)
| **SEC-BERT-SHAPE** | [X.X] (0.787), [X.XX] (0.095), [XX.X] (0.049), [X.XXX] (0.046), [X] (0.013)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during [MASK] compared to 2018. | 2019
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.485), 2018 (0.169), 2016 (0.164), 2015 (0.070), 2014 (0.022)
| **SEC-BERT-BASE** | 2019 (0.990), 2017 (0.007), 2018 (0.003), 2020 (0.000), 2015 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), as (0.000), fiscal (0.000), year (0.000), when (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), as (0.000), year (0.000), periods (0.000), , (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during 2019 compared to [MASK]. | 2018
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.100), 2016 (0.097), above (0.054), inflation (0.050), previously (0.037)
| **SEC-BERT-BASE** | 2018 (0.999), 2019 (0.000), 2017 (0.000), 2016 (0.000), 2014 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), year (0.000), last (0.000), sales (0.000), fiscal (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), year (0.000), sales (0.000), prior (0.000), years (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company [MASK] $67.1 billion of its common stock and paid dividend equivalents of $14.1 billion. | repurchased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | held (0.229), sold (0.192), acquired (0.172), owned (0.052), traded (0.033)
| **SEC-BERT-BASE** | repurchased (0.913), issued (0.036), purchased (0.029), redeemed (0.010), sold (0.003)
| **SEC-BERT-NUM** | repurchased (0.917), purchased (0.054), reacquired (0.013), issued (0.005), acquired (0.003)
| **SEC-BERT-SHAPE** | repurchased (0.902), purchased (0.068), issued (0.010), reacquired (0.008), redeemed (0.006)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common [MASK] and paid dividend equivalents of $14.1 billion. | stock
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | stock (0.835), assets (0.039), equity (0.025), debt (0.021), bonds (0.017)
| **SEC-BERT-BASE** | stock (0.857), shares (0.135), equity (0.004), units (0.002), securities (0.000)
| **SEC-BERT-NUM** | stock (0.842), shares (0.157), equity (0.000), securities (0.000), units (0.000)
| **SEC-BERT-SHAPE** | stock (0.888), shares (0.109), equity (0.001), securities (0.001), stocks (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid [MASK] equivalents of $14.1 billion. | dividend
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | cash (0.276), net (0.128), annual (0.083), the (0.040), debt (0.027)
| **SEC-BERT-BASE** | dividend (0.890), cash (0.018), dividends (0.016), share (0.013), tax (0.010)
| **SEC-BERT-NUM** | dividend (0.735), cash (0.115), share (0.087), tax (0.025), stock (0.013)
| **SEC-BERT-SHAPE** | dividend (0.655), cash (0.248), dividends (0.042), share (0.019), out (0.003)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid dividend [MASK] of $14.1 billion. | equivalents
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | revenue (0.085), earnings (0.078), rates (0.065), amounts (0.064), proceeds (0.062)
| **SEC-BERT-BASE** | payments (0.790), distributions (0.087), equivalents (0.068), cash (0.013), amounts (0.004)
| **SEC-BERT-NUM** | payments (0.845), equivalents (0.097), distributions (0.024), increases (0.005), dividends (0.004)
| **SEC-BERT-SHAPE** | payments (0.784), equivalents (0.093), distributions (0.043), dividends (0.015), requirements (0.009)
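Predictions like those in the tables above can be reproduced with the `fill-mask` pipeline. The snippet below is a minimal sketch for SEC-BERT-NUM (it assumes the checkpoint exposes its masked-LM head; remember to replace the remaining numeric tokens with [NUM], and expect small score differences across library versions):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/sec-bert-num")
# Numeric tokens are already replaced by [NUM], as required by SEC-BERT-NUM.
masked_sentence = "Total net sales [MASK] [NUM] % or $ [NUM] billion during [NUM] compared to [NUM] ."
for prediction in fill_mask(masked_sentence):
    print(prediction["token_str"], round(prediction["score"], 3))
```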
## Publication
<div style="text-align: justify">
If you use this model, please cite the following article:<br>
[**FiNER: Financial Numeric Entity Recognition for XBRL Tagging**](https://arxiv.org/abs/2203.06482)<br>
Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos and George Paliouras<br>
In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022) (Long Papers), Dublin, Republic of Ireland, May 22 - 27, 2022
</div>
```
@inproceedings{loukas-etal-2022-finer,
title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging},
author = {Loukas, Lefteris and
Fergadiotis, Manos and
Chalkidis, Ilias and
Spyropoulou, Eirini and
Malakasiotis, Prodromos and
Androutsopoulos, Ion and
Paliouras, George},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
publisher = {Association for Computational Linguistics},
location = {Dublin, Republic of Ireland},
year = {2022},
url = {https://arxiv.org/abs/2203.06482}
}
```
## About Us
<div style="text-align: justify">
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
</div>
[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) |
nlpaueb/bert-base-uncased-contracts | nlpaueb | 2022-04-28T14:43:56Z | 44,445 | 26 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"legal",
"fill-mask",
"en",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png
tags:
- legal
widget:
- text: "This [MASK] Agreement is between General Motors and John Murray."
---
# LEGAL-BERT: The Muppets straight out of Law School
<img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/>
LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks.<br>
This is the sub-domain variant pre-trained on US contracts.
<br/><br/>
---
I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261)
---
## Pre-training corpora
The pre-training corpora of LEGAL-BERT include:
* 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office.
* 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk).
* 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX.
* 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng).
* 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law).
* 76,366 US contracts from EDGAR, the database of the US Securities and Exchange Commission (SEC) (https://www.sec.gov/edgar.shtml).
## Pre-training details
* We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
## Models list
| Model name | Model Path | Training corpora |
| ------------------- | ------------------------------------ | ------------------- |
| CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts |
| EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation |
| ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases |
| LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All |
| LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All |
\* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. (2020); a model trained from scratch on the legal corpora mentioned above, using a newly created vocabulary produced by a sentence-piece tokenizer trained on the very same corpora.
\*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released on Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper into the open questions of Chalkidis et al. (2020).
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-uncased-contracts")
model = AutoModel.from_pretrained("nlpaueb/bert-base-uncased-contracts")
```
## Use LEGAL-BERT variants as Language Models
| Corpus | Model | Masked token | Predictions |
| --------------------------------- | ---------------------------------- | ------------ | ------------ |
| | **BERT-BASE-UNCASED** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05')
| | **CONTRACTS-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04')
| | **EURLEX-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02')
| | **ECHR-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05')
| | **LEGAL-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01')
| | **LEGAL-BERT-SMALL** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05')
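The predictions in the table above can be reproduced with the `fill-mask` pipeline. A minimal sketch for this checkpoint (scores may differ slightly across library versions):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/bert-base-uncased-contracts")
for prediction in fill_mask("This [MASK] Agreement is between General Motors and John Murray ."):
    print(prediction["token_str"], round(prediction["score"], 2))
```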
## Evaluation on downstream tasks
See the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School" (Chalkidis et al., 2020, https://aclanthology.org/2020.findings-emnlp.261).
## Author - Publication
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
```
## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
nlpaueb/legal-bert-small-uncased | nlpaueb | 2022-04-28T14:43:32Z | 27,608 | 20 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"legal",
"fill-mask",
"en",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png
tags:
- legal
widget:
- text: "The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of police."
---
# LEGAL-BERT: The Muppets straight out of Law School
<img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/>
LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks.<br>
This is the light-weight version of BERT-BASE (33% the size of BERT-BASE) pre-trained from scratch on legal data, which achieves comparable performance to larger models, while being much more efficient (approximately 4 times faster) with a smaller environmental footprint.
<br/><br/>
---
I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261)
---
## Pre-training corpora
The pre-training corpora of LEGAL-BERT include:
* 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office.
* 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk).
* 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX.
* 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng).
* 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law).
* 76,366 US contracts from EDGAR, the database of the US Securities and Exchange Commission (SEC) (https://www.sec.gov/edgar.shtml).
## Pre-training details
* We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
## Models list
| Model name | Model Path | Training corpora |
| ------------------- | ------------------------------------ | ------------------- |
| CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts |
| EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation |
| ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases |
| LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All |
| LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All |
\* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. (2020); a model trained from scratch on the legal corpora mentioned above, using a newly created vocabulary produced by a sentence-piece tokenizer trained on the very same corpora.
\*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released on Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper into the open questions of Chalkidis et al. (2020).
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-small-uncased")
model = AutoModel.from_pretrained("nlpaueb/legal-bert-small-uncased")
```
## Use LEGAL-BERT variants as Language Models
| Corpus | Model | Masked token | Predictions |
| --------------------------------- | ---------------------------------- | ------------ | ------------ |
| | **BERT-BASE-UNCASED** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05')
| | **CONTRACTS-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04')
| | **EURLEX-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02')
| | **ECHR-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05')
| | **LEGAL-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01')
| | **LEGAL-BERT-SMALL** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05')
## Evaluation on downstream tasks
See the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School" (Chalkidis et al., 2020, https://aclanthology.org/2020.findings-emnlp.261).
## Author - Publication
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
```
## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
nlpaueb/legal-bert-base-uncased | nlpaueb | 2022-04-28T14:42:50Z | 525,138 | 197 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"legal",
"fill-mask",
"en",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png
tags:
- legal
widget:
- text: "The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of police."
---
# LEGAL-BERT: The Muppets straight out of Law School
<img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/>
LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks. A light-weight model (33% the size of BERT-BASE) pre-trained from scratch on legal data with competitive performance is also available.
<br/><br/>
---
I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261)
---
## Pre-training corpora
The pre-training corpora of LEGAL-BERT include:
* 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office.
* 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk).
* 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX.
* 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng).
* 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law).
* 76,366 US contracts from EDGAR, the database of the US Securities and Exchange Commission (SEC) (https://www.sec.gov/edgar.shtml).
## Pre-training details
* We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
* Part of LEGAL-BERT is a light-weight model pre-trained from scratch on legal data, which achieves comparable performance to larger models, while being much more efficient (approximately 4 times faster) with a smaller environmental footprint.
## Models list
| Model name | Model Path | Training corpora |
| ------------------- | ------------------------------------ | ------------------- |
| CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts |
| EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation |
| ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases |
| LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All |
| LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All |
\* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. (2020); a model trained from scratch on the legal corpora mentioned above, using a newly created vocabulary produced by a sentence-piece tokenizer trained on the very same corpora.
\*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released on Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper into the open questions of Chalkidis et al. (2020).
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
model = AutoModel.from_pretrained("nlpaueb/legal-bert-base-uncased")
```
## Use LEGAL-BERT variants as Language Models
| Corpus | Model | Masked token | Predictions |
| --------------------------------- | ---------------------------------- | ------------ | ------------ |
| | **BERT-BASE-UNCASED** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05')
| | **CONTRACTS-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04')
| | **EURLEX-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02')
| | **ECHR-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05')
| | **LEGAL-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01')
| | **LEGAL-BERT-SMALL** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05')
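A minimal `fill-mask` sketch for reproducing predictions like those in the table above (scores may vary slightly across library versions):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/legal-bert-base-uncased")
sentence = ("The applicant submitted that her husband was subjected to treatment "
            "amounting to [MASK] whilst in the custody of Adana Security Directorate")
for prediction in fill_mask(sentence):
    print(prediction["token_str"], round(prediction["score"], 2))
```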
## Evaluation on downstream tasks
See the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School" (Chalkidis et al., 2020, https://aclanthology.org/2020.findings-emnlp.261).
## Author - Publication
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
```
## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
horychtom/czech_media_bias_classifier | horychtom | 2022-04-28T13:51:18Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Czech",
"cs",
"autotrain_compatible",
"region:us"
] | text-classification | 2022-04-04T09:04:34Z | ---
inference: false
language: "cs"
tags:
- Czech
---
## Czech Media Bias Classifier
A FERNET-C5 model fine-tuned to perform a binary classification task for Czech media bias detection.
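The snippet below is a minimal local usage sketch (the hosted inference widget is disabled for this repository). It assumes the repository ships a compatible tokenizer; the Czech input sentence is only illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_path = "horychtom/czech_media_bias_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

text = "Toto je ilustrativní česká věta."  # illustrative Czech sentence to classify
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
``` |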
anton-l/xtreme_s_xlsr_300m_fleurs_asr_en_us | anton-l | 2022-04-28T12:39:54Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"fleurs-asr",
"google/xtreme_s",
"generated_from_trainer",
"dataset:google/xtreme_s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-28T10:45:25Z | ---
language:
- en_us
license: apache-2.0
tags:
- fleurs-asr
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
model-index:
- name: xtreme_s_xlsr_300m_fleurs_asr_en_us
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_fleurs_asr_en_us
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - FLEURS.EN_US dataset.
It achieves the following results on the evaluation set:
- Cer: 0.1356
- Loss: 0.5599
- Wer: 0.3148
- Predict Samples: 647
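A minimal transcription sketch using the `automatic-speech-recognition` pipeline (this assumes the repository includes a matching CTC tokenizer/processor; the audio file name is a placeholder for 16 kHz speech):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anton-l/xtreme_s_xlsr_300m_fleurs_asr_en_us",
)
# "sample.wav" is a placeholder path; FLEURS audio is 16 kHz mono.
print(asr("sample.wav")["text"])
```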
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 2.8769 | 5.0 | 200 | 2.8871 | 1.0 | 0.9878 |
| 0.2458 | 10.0 | 400 | 0.5570 | 0.4899 | 0.1951 |
| 0.0762 | 15.0 | 600 | 0.5213 | 0.3727 | 0.1562 |
| 0.0334 | 20.0 | 800 | 0.5742 | 0.3666 | 0.1543 |
| 0.0244 | 25.0 | 1000 | 0.5907 | 0.3546 | 0.1499 |
| 0.0143 | 30.0 | 1200 | 0.5961 | 0.3460 | 0.1469 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
anton-l/xtreme_s_xlsr_300m_fleurs_asr_western_european | anton-l | 2022-04-28T09:56:22Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"fleurs-asr",
"google/xtreme_s",
"generated_from_trainer",
"all",
"dataset:google/xtreme_s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-27T10:27:11Z | ---
language:
- all
license: apache-2.0
tags:
- fleurs-asr
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
model-index:
- name: xtreme_s_xlsr_300m_fleurs_asr_western_european
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_fleurs_asr_western_european
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - FLEURS.ALL dataset.
It achieves the following results on the evaluation set:
- Cer: 0.2484
- Cer Ast Es: 0.1598
- Cer Bs Ba: 0.1749
- Cer Ca Es: 0.1655
- Cer Cy Gb: 0.2280
- Cer Da Dk: 0.3616
- Cer De De: 0.1287
- Cer El Gr: 0.6020
- Cer En Us: 0.1938
- Cer Es 419: 0.1288
- Cer Fi Fi: 0.2050
- Cer Fr Fr: 0.1811
- Cer Ga Ie: 0.4474
- Cer Gl Es: 0.1324
- Cer Hr Hr: 0.1555
- Cer Hu Hu: 0.3911
- Cer Is Is: 0.4646
- Cer It It: 0.1283
- Cer Kea Cv: 0.1818
- Cer Lb Lu: 0.2594
- Cer Mt Mt: 0.3628
- Cer Nb No: 0.2254
- Cer Nl Nl: 0.1790
- Cer Oci Fr: 0.2159
- Cer Pt Br: 0.2275
- Cer Sv Se: 0.3092
- Loss: 1.3089
- Loss Ast Es: 0.7715
- Loss Bs Ba: 0.7378
- Loss Ca Es: 0.7868
- Loss Cy Gb: 1.1441
- Loss Da Dk: 1.9130
- Loss De De: 0.5391
- Loss El Gr: 3.4904
- Loss En Us: 0.9632
- Loss Es 419: 0.6186
- Loss Fi Fi: 0.8953
- Loss Fr Fr: 0.9076
- Loss Ga Ie: 3.0217
- Loss Gl Es: 0.5788
- Loss Hr Hr: 0.6462
- Loss Hu Hu: 1.9029
- Loss Is Is: 2.6551
- Loss It It: 0.6052
- Loss Kea Cv: 0.9107
- Loss Lb Lu: 1.3705
- Loss Mt Mt: 2.3651
- Loss Nb No: 1.1518
- Loss Nl Nl: 0.8490
- Loss Oci Fr: 1.1421
- Loss Pt Br: 1.1641
- Loss Sv Se: 1.5910
- Wer: 0.6451
- Wer Ast Es: 0.4654
- Wer Bs Ba: 0.5443
- Wer Ca Es: 0.4979
- Wer Cy Gb: 0.5962
- Wer Da Dk: 0.8455
- Wer De De: 0.4221
- Wer El Gr: 0.9805
- Wer En Us: 0.4556
- Wer Es 419: 0.3928
- Wer Fi Fi: 0.8116
- Wer Fr Fr: 0.4690
- Wer Ga Ie: 0.8519
- Wer Gl Es: 0.4245
- Wer Hr Hr: 0.4895
- Wer Hu Hu: 0.9099
- Wer Is Is: 0.9960
- Wer It It: 0.4415
- Wer Kea Cv: 0.5202
- Wer Lb Lu: 0.7225
- Wer Mt Mt: 1.0096
- Wer Nb No: 0.6541
- Wer Nl Nl: 0.5257
- Wer Oci Fr: 0.5770
- Wer Pt Br: 0.6685
- Wer Sv Se: 0.8546
- Predict Samples: 20043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
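The list above corresponds roughly to the following `TrainingArguments` sketch (not the original training script; the per-device sizes multiply by the 8 GPUs to give the listed totals, and `output_dir` is illustrative):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xtreme_s_xlsr_300m_fleurs_asr_western_european",
    learning_rate=3e-4,
    per_device_train_batch_size=8,  # x 8 GPUs = total train batch size 64
    per_device_eval_batch_size=1,   # x 8 GPUs = total eval batch size 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    fp16=True,                      # Native AMP
)
```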
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.1411 | 0.49 | 500 | 3.1673 | 1.0 | 1.0 |
| 0.6397 | 0.97 | 1000 | 0.9039 | 0.7171 | 0.2862 |
| 0.4033 | 1.46 | 1500 | 0.8914 | 0.6862 | 0.2763 |
| 0.3473 | 1.94 | 2000 | 0.8017 | 0.6505 | 0.2536 |
| 0.3143 | 2.43 | 2500 | 0.8568 | 0.6566 | 0.2627 |
| 0.3004 | 2.91 | 3000 | 0.8898 | 0.6640 | 0.2686 |
| 0.282 | 3.4 | 3500 | 0.8489 | 0.6637 | 0.2571 |
| 0.2489 | 3.88 | 4000 | 0.8955 | 0.6744 | 0.2691 |
| 0.1706 | 4.37 | 4500 | 0.9190 | 0.6788 | 0.2688 |
| 0.3336 | 4.85 | 5000 | 0.8915 | 0.6594 | 0.2572 |
| 0.1426 | 5.34 | 5500 | 0.9501 | 0.6784 | 0.2686 |
| 0.2301 | 5.83 | 6000 | 1.0217 | 0.6719 | 0.2735 |
| 0.1325 | 6.31 | 6500 | 0.9578 | 0.6691 | 0.2655 |
| 0.1145 | 6.8 | 7000 | 0.9129 | 0.6680 | 0.2593 |
| 0.1202 | 7.28 | 7500 | 0.9646 | 0.6749 | 0.2619 |
| 0.143 | 7.77 | 8000 | 0.9200 | 0.6554 | 0.2554 |
| 0.1012 | 8.25 | 8500 | 0.9553 | 0.6787 | 0.2628 |
| 0.1018 | 8.74 | 9000 | 0.9455 | 0.6445 | 0.2511 |
| 0.1148 | 9.22 | 9500 | 1.0206 | 0.6725 | 0.2629 |
| 0.0794 | 9.71 | 10000 | 0.9305 | 0.6547 | 0.2526 |
| 0.2891 | 10.19 | 10500 | 1.0424 | 0.6709 | 0.2570 |
| 0.1665 | 10.68 | 11000 | 0.9760 | 0.6596 | 0.2507 |
| 0.1956 | 11.17 | 11500 | 0.9549 | 0.6340 | 0.2440 |
| 0.0828 | 11.65 | 12000 | 0.9598 | 0.6403 | 0.2460 |
| 0.059 | 12.14 | 12500 | 0.9972 | 0.6574 | 0.2531 |
| 0.0505 | 12.62 | 13000 | 0.9836 | 0.6534 | 0.2525 |
| 0.0336 | 13.11 | 13500 | 1.0619 | 0.6564 | 0.2519 |
| 0.0435 | 13.59 | 14000 | 1.0844 | 0.6480 | 0.2543 |
| 0.0216 | 14.08 | 14500 | 1.1084 | 0.6512 | 0.2521 |
| 0.0265 | 14.56 | 15000 | 1.1152 | 0.6607 | 0.2563 |
| 0.0975 | 15.05 | 15500 | 1.1060 | 0.6456 | 0.2471 |
| 0.1396 | 15.53 | 16000 | 1.1100 | 0.6337 | 0.2418 |
| 0.0701 | 16.02 | 16500 | 1.1731 | 0.6309 | 0.2415 |
| 0.1171 | 16.5 | 17000 | 1.1302 | 0.6315 | 0.2396 |
| 0.0778 | 16.99 | 17500 | 1.1485 | 0.6379 | 0.2447 |
| 0.0642 | 17.48 | 18000 | 1.2009 | 0.6400 | 0.2464 |
| 0.0322 | 17.96 | 18500 | 1.2028 | 0.6357 | 0.2425 |
| 0.031 | 18.45 | 19000 | 1.2381 | 0.6285 | 0.2416 |
| 0.0579 | 18.93 | 19500 | 1.2299 | 0.6265 | 0.2409 |
| 0.0628 | 19.42 | 20000 | 1.2582 | 0.6277 | 0.2395 |
| 0.074 | 19.9 | 20500 | 1.2572 | 0.6278 | 0.2394 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
daveni/twitter-xlm-roberta-emotion-es | daveni | 2022-04-28T09:49:06Z | 602 | 21 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"Emotion Analysis",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- es
tags:
- Emotion Analysis
---
**Note**: This model & model card are based on the [finetuned XLM-T for Sentiment Analysis](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)
# twitter-XLM-roBERTa-base for Emotion Analysis
This is an XLM-roBERTa-base model trained on ~198M tweets and fine-tuned for emotion analysis of Spanish-language tweets. This model was presented at the EmoEvalEs competition, part of the [IberLEF 2021 Conference](https://sites.google.com/view/iberlef2021/), where the proposed task was the classification of Spanish tweets into seven different classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved first position in the competition with a macro-averaged F1 score of 71.70%.
- [Our code for EmoEvalEs submission](https://github.com/gsi-upm/emoevales-iberlef2021).
- [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs)
## Example Pipeline with a [Tweet from @JaSantaolalla](https://twitter.com/JaSantaolalla/status/1398383243645177860)
```python
from transformers import pipeline
model_path = "daveni/twitter-xlm-roberta-emotion-es"
emotion_analysis = pipeline("text-classification", framework="pt", model=model_path, tokenizer=model_path)
emotion_analysis("Einstein dijo: Solo hay dos cosas infinitas, el universo y los pinches anuncios de bitcoin en Twitter. Paren ya carajo aaaaaaghhgggghhh me quiero murir")
```
```
[{'label': 'anger', 'score': 0.48307016491889954}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
model_path = "daveni/twitter-xlm-roberta-emotion-es"
tokenizer = AutoTokenizer.from_pretrained(model_path )
config = AutoConfig.from_pretrained(model_path )
# PT
model = AutoModelForSequenceClassification.from_pretrained(model_path )
text = "Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal."
text = preprocess(text)
print(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal.
1) joy 0.7887
2) others 0.1679
3) surprise 0.0152
4) sadness 0.0145
5) anger 0.0077
6) disgust 0.0033
7) fear 0.0027
```
#### Limitations and bias
- The dataset we used for fine-tuning was unbalanced: almost half of the records belonged to the *other* class, so there may be a bias towards this class.
## Training data
Pretrained weights were left identical to the original model released by [cardiffnlp](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base). We used the [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs) for finetuning.
### BibTeX entry and citation info
```bibtex
@inproceedings{vera2021gsi,
title={GSI-UPM at IberLEF2021: Emotion Analysis of Spanish Tweets by Fine-tuning the XLM-RoBERTa Language Model},
author={Vera, D and Araque, O and Iglesias, CA},
booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021). CEUR Workshop Proceedings, CEUR-WS, M{\'a}laga, Spain},
year={2021}
}
``` |
bdickson/bert-base-uncased-finetuned-squad | bdickson | 2022-04-28T07:30:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-28T00:58:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1240
- eval_runtime: 262.7193
- eval_samples_per_second: 41.048
- eval_steps_per_second: 2.565
- epoch: 3.0
- step: 16599
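A minimal extractive question-answering sketch (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="bdickson/bert-base-uncased-finetuned-squad")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The bert-base-uncased checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```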
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Lilya/distilbert-base-uncased-finetuned-ner-TRANS | Lilya | 2022-04-28T07:00:58Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-27T11:44:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner-TRANS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner-TRANS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1053
- Precision: 0.7911
- Recall: 0.8114
- F1: 0.8011
- Accuracy: 0.9815
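A minimal token-classification sketch (the entity label set depends on the undocumented fine-tuning data, so the output tags are whatever the checkpoint defines; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Lilya/distilbert-base-uncased-finetuned-ner-TRANS",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Marie Curie worked at the University of Paris."))
```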
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.077 | 1.0 | 3762 | 0.0724 | 0.7096 | 0.7472 | 0.7279 | 0.9741 |
| 0.0538 | 2.0 | 7524 | 0.0652 | 0.7308 | 0.7687 | 0.7493 | 0.9766 |
| 0.0412 | 3.0 | 11286 | 0.0643 | 0.7672 | 0.7875 | 0.7772 | 0.9788 |
| 0.0315 | 4.0 | 15048 | 0.0735 | 0.7646 | 0.7966 | 0.7803 | 0.9793 |
| 0.0249 | 5.0 | 18810 | 0.0772 | 0.7805 | 0.7981 | 0.7892 | 0.9801 |
| 0.0213 | 6.0 | 22572 | 0.0783 | 0.7829 | 0.8063 | 0.7944 | 0.9805 |
| 0.0187 | 7.0 | 26334 | 0.0858 | 0.7821 | 0.8010 | 0.7914 | 0.9809 |
| 0.0157 | 8.0 | 30096 | 0.0860 | 0.7837 | 0.8120 | 0.7976 | 0.9812 |
| 0.0122 | 9.0 | 33858 | 0.0963 | 0.7857 | 0.8129 | 0.7990 | 0.9813 |
| 0.0107 | 10.0 | 37620 | 0.0993 | 0.7934 | 0.8089 | 0.8010 | 0.9812 |
| 0.0091 | 11.0 | 41382 | 0.1031 | 0.7882 | 0.8123 | 0.8001 | 0.9814 |
| 0.0083 | 12.0 | 45144 | 0.1053 | 0.7911 | 0.8114 | 0.8011 | 0.9815 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
OWG/resnet-50 | OWG | 2022-04-28T06:54:33Z | 0 | 1 | null | [
"onnx",
"ResNet-50",
"en",
"arxiv:1512.03385",
"region:us"
] | null | 2022-04-28T06:22:56Z | ---
language:
- en
tags:
- ResNet-50
---
# ResNet-50
## Model Description
ResNet-50 model from [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) paper.
## Original implementation
Follow [this link](https://huggingface.co/microsoft/resnet-50) to see the original implementation.
## How to use
You can use the `base` model, which returns `last_hidden_state`.
```python
from transformers import AutoFeatureExtractor
from onnxruntime import InferenceSession
from datasets import load_dataset
# load image
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
# load model
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
session = InferenceSession("onnx/model.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = feature_extractor(image, return_tensors="np")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
Or you can use the model with a classification head, which returns `logits`.
```python
from transformers import AutoFeatureExtractor
from onnxruntime import InferenceSession
from datasets import load_dataset
# load image
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
# load model
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
session = InferenceSession("onnx/model_cls.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = feature_extractor(image, return_tensors="np")
outputs = session.run(output_names=["logits"], input_feed=dict(inputs))
```
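The classification graph returns raw logits. As a hedged follow-up (not part of the original snippet), the predicted label can be recovered with the `id2label` mapping from the original checkpoint's configuration:

```python
import numpy as np
from transformers import AutoConfig

# Continues from the snippet above; outputs[0] holds the logits, shape (batch, num_labels).
config = AutoConfig.from_pretrained("microsoft/resnet-50")
predicted_id = int(np.argmax(outputs[0], axis=-1)[0])
print(config.id2label[predicted_id])
```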
|
bdickson/electra-small-discriminator-finetuned-squad-finetuned-squad | bdickson | 2022-04-28T06:40:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-28T06:16:38Z | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-small-discriminator-finetuned-squad-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-squad-finetuned-squad
This model is a fine-tuned version of [bdickson/electra-small-discriminator-finetuned-squad](https://huggingface.co/bdickson/electra-small-discriminator-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the illustrative sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
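For orientation only, a hedged sketch of how these settings might be expressed with `TrainingArguments`; the `output_dir` and anything not listed above are assumptions, and the actual training script is not included in this card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported settings; output_dir is an assumption.
args = TrainingArguments(
    output_dir="electra-small-discriminator-finetuned-squad-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
)
```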
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Das282000Prit/fyp-finetuned-imdb | Das282000Prit | 2022-04-28T05:53:55Z | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-04-28T05:46:39Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Das282000Prit/fyp-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Das282000Prit/fyp-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8566
- Validation Loss: 2.6019
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
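Pending a fuller description, here is a minimal, hedged usage sketch; the repository ships TensorFlow weights, hence `framework="tf"`, and the masked sentence is illustrative only.

```python
from transformers import pipeline

# Minimal sketch; the masked sentence is an illustrative placeholder.
fill_mask = pipeline(
    "fill-mask",
    model="Das282000Prit/fyp-finetuned-imdb",
    framework="tf",
)
print(fill_mask("This movie was absolutely [MASK]."))
```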
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8566 | 2.6019 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
snunlp/KR-FinBert | snunlp | 2022-04-28T05:06:40Z | 263 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- ko
---
# KR-FinBert & KR-FinBert-SC
Much progress has been made in NLP (Natural Language Processing), with numerous studies showing that domain adaptation using a small-scale corpus and fine-tuning with labeled data is effective for overall performance improvement.
We propose KR-FinBert for the financial domain, built by further pre-training on a financial corpus and fine-tuning for sentiment analysis. As in many previous studies, the performance improvement from domain adaptation and downstream fine-tuning was also clear in this experiment.

## Data
The training data for this model expands on that of **[KR-BERT-MEDIUM](https://huggingface.co/snunlp/KR-Medium)**: texts from Korean Wikipedia, general news articles, legal texts crawled from the National Law Information Center, and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). For the transfer learning, **corporate-related economic news articles from 72 media sources** (e.g., the Financial Times, The Korean Economy Daily) and **analyst reports from 16 securities companies** (e.g., Kiwoom Securities, Samsung Securities) were added. The dataset includes 440,067 news titles with their content and 11,237 analyst reports. **The total data size is about 13.22GB.** For MLM training, we split the data line by line; **the total number of lines is 6,379,315.**
KR-FinBert was trained for 5.5M steps with a maximum sequence length of 512, a training batch size of 32, and a learning rate of 5e-5, taking 67.48 hours on an NVIDIA TITAN XP.
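As a quick, hedged illustration of querying the pretrained MLM head, the model can be used with the fill-mask pipeline; the Korean example sentence below is ours, not drawn from the training corpus.

```python
from transformers import pipeline

# Illustrative sketch only; the example sentence is an assumption, not from the corpus.
fill_mask = pipeline("fill-mask", model="snunlp/KR-FinBert")
# "Samsung Electronics' share price surged, hitting a [MASK] high."
print(fill_mask("삼성전자 주가가 급등하며 [MASK] 최고치를 기록했다."))
```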
## Citation
```
@misc{kr-FinBert,
author = {Kim, Eunhee and Hyopil Shin},
title = {KR-FinBert: KR-BERT-Medium Adapted With Financial Domain Data},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://huggingface.co/snunlp/KR-FinBert}}
}
``` |
chv5/t5-small-shuffled_take1 | chv5 | 2022-04-28T03:36:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-27T20:27:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-shuffled_take1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 11.9641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-shuffled_take1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1788
- Rouge1: 11.9641
- Rouge2: 10.5245
- Rougel: 11.5825
- Rougelsum: 11.842
- Gen Len: 18.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
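Pending a fuller description, here is a minimal, hedged usage sketch; the `summarize:` prefix follows the usual T5 convention and the input text is illustrative only.

```python
from transformers import pipeline

# Minimal sketch; the article text is an illustrative placeholder.
summarizer = pipeline("text2text-generation", model="chv5/t5-small-shuffled_take1")
article = "summarize: The local council announced a new recycling scheme on Monday, ..."
print(summarizer(article, max_length=60))
```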
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2238 | 1.0 | 34008 | 0.1788 | 11.9641 | 10.5245 | 11.5825 | 11.842 | 18.9838 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
caush/Clickbait5 | caush | 2022-04-28T03:15:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-28T02:50:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: Clickbait5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clickbait5
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0258
## Model description
More information needed
## Intended uses & limitations
More information needed
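Pending a fuller description, here is a minimal, hedged usage sketch; the card does not document the label scheme or score range, so interpret the output accordingly.

```python
from transformers import pipeline

# Minimal sketch; the headline is an illustrative placeholder.
clf = pipeline("text-classification", model="caush/Clickbait5")
print(clf("You won't believe what happened next!"))
```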
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.04 | 50 | 0.0258 |
| No log | 0.08 | 100 | 0.0269 |
| No log | 0.12 | 150 | 0.0259 |
| No log | 0.16 | 200 | 0.0260 |
| No log | 0.21 | 250 | 0.0267 |
| No log | 0.25 | 300 | 0.0276 |
| No log | 0.29 | 350 | 0.0284 |
| No log | 0.33 | 400 | 0.0270 |
| No log | 0.37 | 450 | 0.0269 |
| 0.0195 | 0.41 | 500 | 0.0260 |
| 0.0195 | 0.45 | 550 | 0.0284 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Ahmed9275/ALL-2 | Ahmed9275 | 2022-04-28T02:07:25Z | 64 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-04-28T02:07:14Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL-2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9855383038520813
---
# ALL-2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
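A minimal, hedged usage sketch; `"path/to/image.jpg"` is a placeholder for your own file.

```python
from transformers import pipeline

# Minimal sketch; "path/to/image.jpg" is a placeholder.
classifier = pipeline("image-classification", model="Ahmed9275/ALL-2")
print(classifier("path/to/image.jpg"))
```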
## Example Images |
caush/Clickbait3 | caush | 2022-04-28T02:06:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-28T01:53:58Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Clickbait3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clickbait3
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.05 | 50 | 0.0373 |
| No log | 0.1 | 100 | 0.0320 |
| No log | 0.15 | 150 | 0.0295 |
| No log | 0.21 | 200 | 0.0302 |
| No log | 0.26 | 250 | 0.0331 |
| No log | 0.31 | 300 | 0.0280 |
| No log | 0.36 | 350 | 0.0277 |
| No log | 0.41 | 400 | 0.0316 |
| No log | 0.46 | 450 | 0.0277 |
| 0.0343 | 0.51 | 500 | 0.0276 |
| 0.0343 | 0.56 | 550 | 0.0282 |
| 0.0343 | 0.62 | 600 | 0.0280 |
| 0.0343 | 0.67 | 650 | 0.0271 |
| 0.0343 | 0.72 | 700 | 0.0264 |
| 0.0343 | 0.77 | 750 | 0.0265 |
| 0.0343 | 0.82 | 800 | 0.0260 |
| 0.0343 | 0.87 | 850 | 0.0263 |
| 0.0343 | 0.92 | 900 | 0.0259 |
| 0.0343 | 0.97 | 950 | 0.0277 |
| 0.0278 | 1.03 | 1000 | 0.0281 |
| 0.0278 | 1.08 | 1050 | 0.0294 |
| 0.0278 | 1.13 | 1100 | 0.0256 |
| 0.0278 | 1.18 | 1150 | 0.0258 |
| 0.0278 | 1.23 | 1200 | 0.0254 |
| 0.0278 | 1.28 | 1250 | 0.0265 |
| 0.0278 | 1.33 | 1300 | 0.0252 |
| 0.0278 | 1.38 | 1350 | 0.0251 |
| 0.0278 | 1.44 | 1400 | 0.0264 |
| 0.0278 | 1.49 | 1450 | 0.0262 |
| 0.023 | 1.54 | 1500 | 0.0272 |
| 0.023 | 1.59 | 1550 | 0.0278 |
| 0.023 | 1.64 | 1600 | 0.0255 |
| 0.023 | 1.69 | 1650 | 0.0258 |
| 0.023 | 1.74 | 1700 | 0.0262 |
| 0.023 | 1.79 | 1750 | 0.0250 |
| 0.023 | 1.85 | 1800 | 0.0253 |
| 0.023 | 1.9 | 1850 | 0.0271 |
| 0.023 | 1.95 | 1900 | 0.0248 |
| 0.023 | 2.0 | 1950 | 0.0258 |
| 0.0224 | 2.05 | 2000 | 0.0252 |
| 0.0224 | 2.1 | 2050 | 0.0259 |
| 0.0224 | 2.15 | 2100 | 0.0254 |
| 0.0224 | 2.21 | 2150 | 0.0260 |
| 0.0224 | 2.26 | 2200 | 0.0254 |
| 0.0224 | 2.31 | 2250 | 0.0266 |
| 0.0224 | 2.36 | 2300 | 0.0258 |
| 0.0224 | 2.41 | 2350 | 0.0258 |
| 0.0224 | 2.46 | 2400 | 0.0256 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
yihsuan/best_model_0427_small_long | yihsuan | 2022-04-28T01:51:38Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-04-27T09:08:17Z | ---
tags:
- summarization
- mT5
language:
- zh
widget:
- text: "專家稱維康桑格研究所(Wellcome Sanger Institute)的上述研究發現「令人震驚」而且「發人深省」。基因變異指關於我們身體成長和管理的相關指令,也就是DNA當中發生的變化。長期以來,變異一直被當作癌症的根源,但是數十年來關於變異是否對衰老有重要影響一直存在爭論。桑格研究所的研究人員說他們得到了「第一個試驗性證據」,證明了兩者的關係。他們分析了預期壽命各異的物種基因變異的不同速度。研究人員分析了貓、黑白疣猴、狗、雪貂、長頸鹿、馬、人、獅子、裸鼴鼠、兔子、老鼠、環尾狐猴和老虎等十幾種動物的DNA。發表在《自然》雜誌上的研究顯示,老鼠在短暫的生命當中每年經歷了將近800次變異,老鼠的壽命一般不到4年。"
inference:
parameters:
max_length: 120
--- |
yihsuan/best_model_0426_base | yihsuan | 2022-04-28T01:44:27Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-04-26T09:05:10Z | ---
tags:
- summarization
- mT5
language:
- zh
widget:
- text: "專家稱維康桑格研究所(Wellcome Sanger Institute)的上述研究發現「令人震驚」而且「發人深省」。基因變異指關於我們身體成長和管理的相關指令,也就是DNA當中發生的變化。長期以來,變異一直被當作癌症的根源,但是數十年來關於變異是否對衰老有重要影響一直存在爭論。桑格研究所的研究人員說他們得到了「第一個試驗性證據」,證明了兩者的關係。他們分析了預期壽命各異的物種基因變異的不同速度。研究人員分析了貓、黑白疣猴、狗、雪貂、長頸鹿、馬、人、獅子、裸鼴鼠、兔子、老鼠、環尾狐猴和老虎等十幾種動物的DNA。發表在《自然》雜誌上的研究顯示,老鼠在短暫的生命當中每年經歷了將近800次變異,老鼠的壽命一般不到4年。"
inference:
parameters:
max_length: 50
--- |
Ahmed9275/ALL | Ahmed9275 | 2022-04-28T01:01:23Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-04-28T01:00:00Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9262039065361023
---
# ALL
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
SerdarHelli/Brain-MRI-GAN | SerdarHelli | 2022-04-27T20:32:07Z | 0 | 0 | null | [
"brainMRI",
"GAN",
"medicalimaging",
"pytorch",
"region:us"
] | null | 2022-04-27T19:07:39Z | ---
tags:
- brainMRI
- GAN
- medicalimaging
- pytorch
metrics:
- fid50k
---
The source code for the model (custom kernels, etc.) is available at https://github.com/NVlabs/stylegan3 |
gagan3012/ArOCRv4 | gagan3012 | 2022-04-27T20:23:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"doi:10.57967/hf/0018",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2022-04-27T18:49:46Z | ---
tags:
- generated_from_trainer
model-index:
- name: ArOCRv4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArOCRv4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5811
- Cer: 0.1249
## Model description
More information needed
## Intended uses & limitations
More information needed
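Pending a fuller description, a heavily hedged usage sketch, assuming the repository bundles a TrOCR-style processor; if it does not, substitute the processor the model was actually trained with. The image path is a placeholder.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Hypothetical sketch; the processor choice and image path are assumptions.
processor = TrOCRProcessor.from_pretrained("gagan3012/ArOCRv4")
model = VisionEncoderDecoderModel.from_pretrained("gagan3012/ArOCRv4")

image = Image.open("path/to/text_line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```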
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 3.103 | 1.18 | 1000 | 8.0852 | 11.5974 |
| 1.2535 | 2.36 | 2000 | 2.0400 | 0.4904 |
| 0.5682 | 3.55 | 3000 | 1.9336 | 0.2145 |
| 0.3038 | 4.73 | 4000 | 1.5811 | 0.1249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.11.6
|
iamholmes/english-phrases-bible | iamholmes | 2022-04-27T19:48:58Z | 69 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-04-27T19:48:50Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/msmarco-distilbert-base-tas-b
This is a port of the [DistilBert TAS-B Model](https://huggingface.co/sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco) to a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and is optimized for the task of semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-tas-b')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take output from first token
def cls_pooling(model_output):
return model_output.last_hidden_state[:,0]
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = cls_pooling(model_output)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-tas-b)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [DistilBert TAS-B Model](https://huggingface.co/sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco) |
princeton-nlp/efficient_mlm_m0.15-801010 | princeton-nlp | 2022-04-27T18:54:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-04-22T18:45:04Z | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
princeton-nlp/efficient_mlm_m0.40 | princeton-nlp | 2022-04-27T18:54:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-04-22T18:44:55Z | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
LiYuan/amazon-cross-encoder | LiYuan | 2022-04-27T18:36:36Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-27T18:06:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8244
- Accuracy: 0.6617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8981 | 1.0 | 35702 | 0.8662 | 0.6371 |
| 0.7837 | 2.0 | 71404 | 0.8244 | 0.6617 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
HannahRoseKirk/Hatemoji | HannahRoseKirk | 2022-04-27T18:17:04Z | 30 | 4 | transformers | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"hate-speech-detection",
"en",
"dataset:HatemojiBuild",
"dataset:HatemojiCheck",
"arxiv:2108.05921",
"arxiv:2012.15761",
"arxiv:2202.11176",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-13T09:36:34Z | ---
license: cc-by-4.0
language:
- en
tags:
- text-classification
- pytorch
- hate-speech-detection
datasets:
- HatemojiBuild
- HatemojiCheck
metrics:
- Accuracy, F1 Score
---
# Hatemoji Model
## Model description
This model is a fine-tuned version of the [DeBERTa base model](https://huggingface.co/microsoft/deberta-base). This model is cased. The model was trained on iterative rounds of adversarial data generation with human-and-model-in-the-loop. In each round, annotators are tasked with tricking the model-in-the-loop with emoji-containing statements that it will misclassify. Between each round, the model is retrained. This is the final model from the iterative process, referred to as R8-T in our paper. The intended task is to classify an emoji-containing statement as either non-hateful (LABEL 0.0) or hateful (LABEL 1.0).
- **Github Repository:** https://github.com/HannahKirk/Hatemoji
- **HuggingFace Datasets:** [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild) & [HatemojiCheck](https://huggingface.co/datasets/HannahRoseKirk/HatemojiCheck)
- **Paper:** https://arxiv.org/abs/2108.05921
- **Point of Contact:** [email protected]
## Intended uses & limitations
The intended use of the model is to classify English-language, emoji-containing, short-form text documents as a binary task: non-hateful vs hateful. The model has demonstrated strengths compared to commercial and academic models on classifying emoji-based hate, but is also a strong classifier of text-only hate. Because the model was trained on synthetic, adversarially-generated data, it may have some weaknesses when it comes to empirical emoji-based hate 'in-the-wild'.
You can interact with this model on [Dynabench](https://dynabench.org/tasks/hs), and find its limitations. We hope to continue improving the model on new adversarial data to better iron out its remaining weaknesses!
## How to use
The model can be used with pipeline:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='HannahRoseKirk/Hatemoji', return_all_scores=True)
prediction = classifier("I 💜💙💚 emoji 😍", )
print(prediction)
"""
Output
[[{'label': 'LABEL_0', 'score': 0.9999157190322876}, {'label': 'LABEL_1', 'score': 8.425049600191414e-05}]]
"""
```
### Training data
The model was trained on:
* The three rounds of emoji-containing, adversarially-generated texts from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild)
* The four rounds of text-only, adversarially-generated texts from Vidgen et al., (2021). _Learning from the worst: Dynamically generated datasets to improve online hate detection_. Available on [Github](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset) and explained in their [paper](https://arxiv.org/abs/2012.15761).
* A collection of widely available and publicly accessible datasets from [hatespeechdata.com](https://hatespeechdata.com/)
## Train procedure
The model was trained using HuggingFace's [run glue script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py), using the following parameters:
```
python3 transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path microsoft/deberta-base \
--validation_file path_to_data/dev.csv \
--train_file path_to_data/train.csv \
--do_train --do_eval --max_seq_length 512 --learning_rate 2e-5 \
--num_train_epochs 3 --evaluation_strategy epoch \
--load_best_model_at_end --output_dir path_to_outdir/deberta123/ \
--seed 123 \
--cache_dir /.cache/huggingface/transformers/ \
--overwrite_output_dir > ./log_deb 2> ./err_deb
```
We experimented with upsampling the train split of each round by factors of [1, 5, 10, 100] to improve performance, with the optimal upsampling ratio taken forward to all subsequent rounds. The optimal upsampling ratios for R1-R4 (the text-only rounds from Vidgen et al.) are carried forward. This model is trained on upsampling ratios of `{'R0':1, 'R1':5, 'R2':100, 'R3':1, 'R4':1, 'R5':100, 'R6':1, 'R7':5}`.
## Variable and metrics
We wished to train a model which could effectively encode information about emoji-based hate, without worsening performance on text-only hate. Thus, we evaluate the model on:
* [HatemojiCheck](https://huggingface.co/datasets/HannahRoseKirk/HatemojiCheck), an evaluation checklist with 7 functionalities of emoji-based hate and contrast sets
* [HateCheck](https://huggingface.co/datasets/Paul/hatecheck), an evaluation checklist contains 29 functional tests for hate speech and contrast sets.
* The held-out test sets from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild), the three rounds of adversarially-generated data collection with emoji-containing examples (R5-R7), available on Hugging Face
* The held-out test sets from the four rounds of adversarially-generated data collection with text-only examples (R1-4, from [Vidgen et al.](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset))
For the round-specific test sets, we used a weighted F1-score across them to choose the final model in each round. For more details, see our [paper](https://arxiv.org/abs/2108.05921)
## Evaluation results
We compare our model to:
* **P-IA**: the identity attack attribute from Perspective API
* **P-TX**: the toxicity attribute from Perspective API
* **B-D**: A BERT model trained on the [Davidson et al. (2017)](https://github.com/t-davidson/hate-speech-and-offensive-language) dataset
* **B-F**: A BERT model trained on the [Founta et al. (2018)](https://github.com/ENCASEH2020/hatespeech-twitter) dataset
| | **Emoji Test Sets** | | | | **Text Test Sets** | | | | **All Rounds** | |
| :------- | :-----------------: | :--------: | :------------: | :--------: | :----------------: | :--------: | :-----------: | :--------: | :------------: | :--------: |
| | **R5-R7** | | **HmojiCheck** | | **R1-R4** | | **HateCheck** | | **R1-R7** | |
| | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** |
| **P-IA** | 0.508 | 0.394 | 0.689 | 0.754 | 0.679 | 0.720 | 0.765 | 0.839 | 0.658 | 0.689 |
| **P-TX** | 0.523 | 0.448 | 0.650 | 0.711 | 0.602 | 0.659 | 0.720 | 0.813 | 0.592 | 0.639 |
| **B-D** | 0.489 | 0.270 | 0.578 | 0.636 | 0.589 | 0.607 | 0.632 | 0.738 | 0.591 | 0.586 |
| **B-F** | 0.496 | 0.322 | 0.552 | 0.605 | 0.562 | 0.562 | 0.602 | 0.694 | 0.557 | 0.532 |
| **Hatemoji** | **0.744** | **0.755** | **0.871** | **0.904** | **0.827** | **0.844** | **0.966** | **0.975** | **0.814** | **0.829** |
For full discussion of the model results, see our [paper](https://arxiv.org/abs/2108.05921).
A recent [paper](https://arxiv.org/pdf/2202.11176.pdf) by Lees et al., (2022) _A New Generation of Perspective API:Efficient Multilingual Character-level Transformers_ beats this model on the HatemojiCheck benchmark. |
obokkkk/mbart-large-cc25-finetuned-en-to-ko2 | obokkkk | 2022-04-27T17:49:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-27T15:00:41Z | ---
tags:
- generated_from_trainer
model-index:
- name: mbart-large-cc25-finetuned-en-to-ko2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-en-to-ko2
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
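Pending a fuller description, a hedged usage sketch, assuming the fine-tune keeps the mBART-cc25 tokenizer and its language codes; the English input sentence is illustrative only.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Hypothetical sketch; language codes assume the standard mBART-cc25 vocabulary.
model_id = "obokkkk/mbart-large-cc25-finetuned-en-to-ko2"
tokenizer = MBartTokenizer.from_pretrained(model_id, src_lang="en_XX", tgt_lang="ko_KR")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["ko_KR"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```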
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|