modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
ali2066/finetuned_token_itr0_0.0002_editorials_16_02_2022-21_07_38 | ali2066 | 2022-02-16T20:08:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_editorials_16_02_2022-21_07_38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_editorials_16_02_2022-21_07_38
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1146
- Precision: 0.4662
- Recall: 0.4718
- F1: 0.4690
- Accuracy: 0.9773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
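These settings correspond to a standard `Trainer` run; the sketch below is a hypothetical reconstruction (the base checkpoint is taken from the card, but the label count, output directory, and dataset objects are placeholders, not part of the original card):
```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical reconstruction of the hyperparameters listed above.
base = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(
    base, num_labels=2, ignore_mismatched_sizes=True  # label count is an assumption
)

args = TrainingArguments(
    output_dir="finetuned_token_editorials",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```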
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0756 | 0.2960 | 0.4505 | 0.3573 | 0.9775 |
| No log | 2.0 | 30 | 0.0626 | 0.3615 | 0.4231 | 0.3899 | 0.9808 |
| No log | 3.0 | 45 | 0.0602 | 0.4898 | 0.5275 | 0.5079 | 0.9833 |
| No log | 4.0 | 60 | 0.0719 | 0.5517 | 0.5275 | 0.5393 | 0.9849 |
| No log | 5.0 | 75 | 0.0754 | 0.5765 | 0.5385 | 0.5568 | 0.9849 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05 | ali2066 | 2022-02-16T20:06:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1114
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0921 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 2.0 | 30 | 0.0816 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 3.0 | 45 | 0.0781 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 4.0 | 60 | 0.0746 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 5.0 | 75 | 0.0737 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_essays_16_02_2022-21_01_51 | ali2066 | 2022-02-16T20:02:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_essays_16_02_2022-21_01_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_essays_16_02_2022-21_01_51
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2525
- Precision: 0.3997
- Recall: 0.5117
- F1: 0.4488
- Accuracy: 0.9115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.4652 | 0.1528 | 0.3588 | 0.2144 | 0.7851 |
| No log | 2.0 | 22 | 0.3646 | 0.2913 | 0.4847 | 0.3639 | 0.8521 |
| No log | 3.0 | 33 | 0.3453 | 0.3789 | 0.5611 | 0.4523 | 0.8708 |
| No log | 4.0 | 44 | 0.3270 | 0.3673 | 0.5496 | 0.4404 | 0.8729 |
| No log | 5.0 | 55 | 0.3268 | 0.4011 | 0.5725 | 0.4717 | 0.8760 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50 | ali2066 | 2022-02-16T20:00:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5450
- Precision: 0.0049
- Recall: 0.0146
- F1: 0.0074
- Accuracy: 0.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6830 | 0.0109 | 0.0323 | 0.0163 | 0.5685 |
| No log | 2.0 | 20 | 0.7187 | 0.0256 | 0.0323 | 0.0286 | 0.5668 |
| No log | 3.0 | 30 | 0.6839 | 0.0076 | 0.0484 | 0.0131 | 0.5848 |
| No log | 4.0 | 40 | 0.6988 | 0.0092 | 0.0484 | 0.0155 | 0.5918 |
| No log | 5.0 | 50 | 0.7055 | 0.0100 | 0.0484 | 0.0165 | 0.5946 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_webDiscourse_16_02_2022-20_58_45 | ali2066 | 2022-02-16T19:59:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_webDiscourse_16_02_2022-20_58_45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_webDiscourse_16_02_2022-20_58_45
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6373
- Precision: 0.0024
- Recall: 0.0072
- F1: 0.0036
- Accuracy: 0.6329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 10 | 0.5913 | 0.0 | 0.0 | 0.0 | 0.7023 |
| No log | 2.0 | 20 | 0.5833 | 0.0 | 0.0 | 0.0 | 0.7062 |
| No log | 3.0 | 30 | 0.5717 | 0.0 | 0.0 | 0.0 | 0.7059 |
| No log | 4.0 | 40 | 0.5696 | 0.0 | 0.0 | 0.0 | 0.7008 |
| No log | 5.0 | 50 | 0.5669 | 0.0 | 0.0 | 0.0 | 0.7010 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
arampacha/wav2vec2-xls-r-300m-hy-cv | arampacha | 2022-02-16T19:45:37Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hy",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hy
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5891
- Wer: 0.6569
**Note**: If you aim for best performance, use [this model](https://huggingface.co/arampacha/wav2vec2-xls-r-300m-hy). It was trained with the noisy student procedure and achieves considerably better results.
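For quick transcription with the recommended checkpoint, a minimal inference sketch using the standard ASR pipeline (the audio path is a placeholder and should point to a 16 kHz mono recording):
```python
from transformers import pipeline

# Minimal inference sketch; "speech.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition",
               model="arampacha/wav2vec2-xls-r-300m-hy")
print(asr("speech.wav")["text"])
```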
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.167 | 16.67 | 100 | 3.5599 | 1.0 |
| 3.2645 | 33.33 | 200 | 3.1771 | 1.0 |
| 3.1509 | 50.0 | 300 | 3.1321 | 1.0 |
| 3.0757 | 66.67 | 400 | 2.8594 | 1.0 |
| 2.5274 | 83.33 | 500 | 1.5286 | 0.9797 |
| 1.6826 | 100.0 | 600 | 0.8058 | 0.7974 |
| 1.2868 | 116.67 | 700 | 0.6713 | 0.7279 |
| 1.1262 | 133.33 | 800 | 0.6308 | 0.7034 |
| 1.0408 | 150.0 | 900 | 0.6056 | 0.6745 |
| 0.9617 | 166.67 | 1000 | 0.5891 | 0.6569 |
| 0.9196 | 183.33 | 1100 | 0.5913 | 0.6432 |
| 0.8853 | 200.0 | 1200 | 0.5924 | 0.6347 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-20_30_01 | ali2066 | 2022-02-16T19:32:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_all_16_02_2022-20_30_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_all_16_02_2022-20_30_01
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1577
- Precision: 0.4469
- Recall: 0.5280
- F1: 0.4841
- Accuracy: 0.9513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3553 | 0.1068 | 0.0810 | 0.0922 | 0.8412 |
| No log | 2.0 | 76 | 0.2812 | 0.2790 | 0.4017 | 0.3293 | 0.8684 |
| No log | 3.0 | 114 | 0.2793 | 0.3086 | 0.4586 | 0.3689 | 0.8747 |
| No log | 4.0 | 152 | 0.2766 | 0.3057 | 0.4190 | 0.3535 | 0.8763 |
| No log | 5.0 | 190 | 0.2805 | 0.2699 | 0.4845 | 0.3467 | 0.8793 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_all_16_02_2022-20_25_06 | ali2066 | 2022-02-16T19:27:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_all_16_02_2022-20_25_06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_all_16_02_2022-20_25_06
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1778
- Precision: 0.3270
- Recall: 0.3348
- F1: 0.3309
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.4023 | 0.1050 | 0.2331 | 0.1448 | 0.8121 |
| No log | 2.0 | 76 | 0.3629 | 0.1856 | 0.3414 | 0.2405 | 0.8368 |
| No log | 3.0 | 114 | 0.3329 | 0.1794 | 0.3594 | 0.2394 | 0.8504 |
| No log | 4.0 | 152 | 0.3261 | 0.1786 | 0.3684 | 0.2405 | 0.8503 |
| No log | 5.0 | 190 | 0.3244 | 0.1872 | 0.3684 | 0.2482 | 0.8534 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04 | ali2066 | 2022-02-16T19:14:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1620
- Precision: 0.3509
- Recall: 0.3793
- F1: 0.3646
- Accuracy: 0.9468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.2997 | 0.1125 | 0.2057 | 0.1454 | 0.8669 |
| No log | 2.0 | 76 | 0.2620 | 0.1928 | 0.2849 | 0.2300 | 0.8899 |
| No log | 3.0 | 114 | 0.2497 | 0.1923 | 0.2906 | 0.2314 | 0.8918 |
| No log | 4.0 | 152 | 0.2474 | 0.1819 | 0.3377 | 0.2365 | 0.8905 |
| No log | 5.0 | 190 | 0.2418 | 0.2128 | 0.3264 | 0.2576 | 0.8997 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
AWTStress/stress_score | AWTStress | 2022-02-16T18:44:04Z | 9 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: stress_score
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stress_score
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Harveenchadha/model-entailment | Harveenchadha | 2022-02-16T16:10:23Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"nlp",
"region:us"
]
| null | 2022-03-02T23:29:04Z | ---
tags:
- nlp
library_name: keras
---
## Multimodal entailment
Author: Sayak Paul
Date created: 2021/08/08
Last modified: 2021/08/15
Description: Training a multimodal model for predicting entailment.
### What is multimodal entailment?
On social media platforms, to audit and moderate content, we may want to find answers to the following questions in near real time:
Does a given piece of information contradict the other?
Does a given piece of information imply the other?
In NLP, this task is called analyzing textual entailment. However, that applies only when the information comes from text content. In practice, it is often the case that the available information comes not just from text, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities.
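A hedged loading sketch, assuming the repository stores a plain saved Keras model that the `huggingface_hub` Keras helper can pull (input preprocessing for the text and image branches is model-specific and not documented here):
```python
from huggingface_hub import from_pretrained_keras

# Hedged sketch: download and inspect the saved Keras model.
# The exact input signature (text + image tensors) is an assumption and
# depends on how the multimodal model was exported.
model = from_pretrained_keras("Harveenchadha/model-entailment")
model.summary()
```
|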
ali2066/finetuned_token_3e-05_all_16_02_2022-16_25_56 | ali2066 | 2022-02-16T15:29:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_25_56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_25_56
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
mohamed-illiyas/wav2vec2-base-lj-demo-colab | mohamed-illiyas | 2022-02-16T15:24:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-lj-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-lj-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7050
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 0.41 | 20 | 15.5667 | 1.0 |
| No log | 0.82 | 40 | 11.6885 | 1.0 |
| 8.569 | 1.22 | 60 | 6.0060 | 1.0 |
| 8.569 | 1.63 | 80 | 3.7050 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali2066/finetuned_token_3e-05_all_16_02_2022-16_12_51 | ali2066 | 2022-02-16T15:16:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_12_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_12_51
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_3e-05_all_16_02_2022-16_09_36 | ali2066 | 2022-02-16T15:12:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_09_36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_09_36
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_all_16_02_2022-16_06_20 | ali2066 | 2022-02-16T15:09:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-16_06_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-16_06_20
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.3286
- Recall: 0.3334
- F1: 0.3310
- Accuracy: 0.9447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 |
| No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 |
| No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 |
| No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 |
| No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
philschmid/distilbert-onnx | philschmid | 2022-02-16T14:51:05Z | 57,058 | 2 | transformers | [
"transformers",
"onnx",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: "en"
datasets:
- squad
metrics:
- squad
license: apache-2.0
---
# ONNX Conversion of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad)
# DistilBERT base cased distilled SQuAD
This model is a fine-tuned checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1.
This model reaches an F1 score of 87.1 on the dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
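A hedged inference sketch with `onnxruntime` (the file name `model.onnx`, the graph input names, and the output order are assumptions based on the usual `transformers` ONNX export; they are not stated in this card):
```python
import numpy as np
import onnxruntime
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

# Assumption: the repo stores the exported graph as "model.onnx" with the
# standard (input_ids, attention_mask) inputs and (start_logits, end_logits) outputs.
onnx_path = hf_hub_download("philschmid/distilbert-onnx", "model.onnx")
session = onnxruntime.InferenceSession(onnx_path)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")

question = "What was distilled to create this model?"
context = "This model is a fine-tuned checkpoint of DistilBERT-base-cased on SQuAD v1.1."
inputs = tokenizer(question, context, return_tensors="np")
start_logits, end_logits = session.run(None, dict(inputs))

start, end = int(np.argmax(start_logits)), int(np.argmax(end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```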
|
ali2066/finetuned_token_2e-05_all_16_02_2022-15_48_32 | ali2066 | 2022-02-16T14:50:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_48_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_48_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.3286
- Recall: 0.3334
- F1: 0.3310
- Accuracy: 0.9447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 |
| No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 |
| No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 |
| No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 |
| No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_all_16_02_2022-15_46_07 | ali2066 | 2022-02-16T14:48:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_46_07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_46_07
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.3286
- Recall: 0.3334
- F1: 0.3310
- Accuracy: 0.9447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 |
| No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 |
| No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 |
| No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 |
| No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_all_16_02_2022-15_41_15 | ali2066 | 2022-02-16T14:43:38Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_41_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_41_15
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1742
- Precision: 0.3447
- Recall: 0.3410
- F1: 0.3428
- Accuracy: 0.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3692 | 0.0868 | 0.2030 | 0.1216 | 0.8238 |
| No log | 2.0 | 76 | 0.3198 | 0.1674 | 0.3029 | 0.2157 | 0.8567 |
| No log | 3.0 | 114 | 0.3156 | 0.1520 | 0.3096 | 0.2039 | 0.8510 |
| No log | 4.0 | 152 | 0.3129 | 0.1753 | 0.3266 | 0.2281 | 0.8500 |
| No log | 5.0 | 190 | 0.3038 | 0.1716 | 0.3401 | 0.2281 | 0.8595 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
marcopost-it/biobert-it | marcopost-it | 2022-02-16T14:15:27Z | 153 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | Hi!
This model has been trained on Italian biomedical data.
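The repo is tagged for fill-mask, so a minimal usage sketch would look like this (the Italian example sentence is illustrative only):
```python
from transformers import pipeline

# Minimal fill-mask sketch; [MASK] is the standard BERT mask token.
fill = pipeline("fill-mask", model="marcopost-it/biobert-it")
for pred in fill("Il paziente presenta una [MASK] polmonare."):
    print(pred["token_str"], round(pred["score"], 3))
```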
For further information, do not hesitate to send me a message! ;)
[email protected] (Marco Postiglione) |
ali2066/finetuned_token_2e-05_16_02_2022-14_30_32 | ali2066 | 2022-02-16T13:32:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_30_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_30_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-14_28_10 | ali2066 | 2022-02-16T13:30:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_28_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_28_10
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-14_23_23 | ali2066 | 2022-02-16T13:25:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_23_23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_23_23
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-14_15_41 | ali2066 | 2022-02-16T13:18:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_15_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_15_41
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1746
- Precision: 0.3191
- Recall: 0.3382
- F1: 0.3284
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.2908 | 0.1104 | 0.1905 | 0.1398 | 0.8731 |
| No log | 2.0 | 76 | 0.2253 | 0.1682 | 0.3206 | 0.2206 | 0.9114 |
| No log | 3.0 | 114 | 0.2041 | 0.2069 | 0.3444 | 0.2585 | 0.9249 |
| No log | 4.0 | 152 | 0.1974 | 0.2417 | 0.3603 | 0.2894 | 0.9269 |
| No log | 5.0 | 190 | 0.1958 | 0.2707 | 0.3683 | 0.3120 | 0.9299 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
NeonBohdan/tts-tacotron2-ljspeech-pl | NeonBohdan | 2022-02-16T12:18:17Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04Z | ---
license: apache-2.0
---
|
joe5campbell/BERT_Tweet_Sentiment_TEST | joe5campbell | 2022-02-16T11:03:42Z | 7 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_TEST
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_TEST
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5541
- Train Accuracy: 0.9375
- Validation Loss: 0.6546
- Validation Accuracy: 1.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
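The optimizer dictionary above corresponds to a plain Keras Adam configuration; a hedged reconstruction follows (the model/loss wiring and label count are assumptions, not taken from this card):
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Hedged reconstruction of the optimizer settings listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
)

# Assumption: a standard TF sequence-classification head on bert-base-uncased.
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
```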
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6902 | 0.625 | 0.6677 | 1.0 | 0 |
| 0.5541 | 0.9375 | 0.6546 | 1.0 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
chaitanya97/wav2vec2-large-xls-r-300m-turkish-colab | chaitanya97 | 2022-02-16T10:38:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 33.1265
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
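With gradient accumulation and warmup included, these settings map onto `TrainingArguments` roughly as follows (a hedged sketch; the output directory is a placeholder):
```python
from transformers import TrainingArguments

# Hedged mapping of the listed hyperparameters onto Trainer arguments.
# Effective batch size: 8 per device x 2 accumulation steps = 16.
args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-turkish-colab",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=5,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
)
```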
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 21.4247 | 4.0 | 4 | 33.1265 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
joe5campbell/BERT_Tweet_Sentiment_100_2epochs | joe5campbell | 2022-02-16T10:34:00Z | 7 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_100_2epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_100_2epochs
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6279
- Train Accuracy: 0.6824
- Validation Loss: 0.7791
- Validation Accuracy: 0.2667
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7045 | 0.4882 | 0.7236 | 0.2667 | 0 |
| 0.6279 | 0.6824 | 0.7791 | 0.2667 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
premrawat/en_ner_skills | premrawat | 2022-02-16T09:14:23Z | 6 | 5 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_ner_skills
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3980582524
- name: NER Recall
type: recall
value: 0.3404507711
- name: NER F Score
type: f_score
value: 0.3670076726
---
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_skills` |
| **Version** | `0.1.0` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
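A hedged usage sketch, assuming the packaged pipeline has been installed locally (the install step follows the usual spaCy packaging convention and is not stated in this card):
```python
import spacy

# Assumption: the pipeline package has been installed from the repo's wheel,
# e.g. pip install <wheel from huggingface.co/premrawat/en_ner_skills>
# (exact wheel filename not shown in the card).
nlp = spacy.load("en_ner_skills")
doc = nlp("Looking for candidates with experience in Python, SQL and project management.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # entities labeled SKILL
```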
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 36.70 |
| `ENTS_P` | 39.81 |
| `ENTS_R` | 34.05 |
| `TOK2VEC_LOSS` | 607659.90 |
| `NER_LOSS` | 491709.76 | |
Minowa/distilbert-base-uncased-finetuned-ner | Minowa | 2022-02-16T07:09:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9239501818582607
- name: Recall
type: recall
value: 0.9378006488421524
- name: F1
type: f1
value: 0.9308238951809905
- name: Accuracy
type: accuracy
value: 0.9837800054013695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0596
- Precision: 0.9240
- Recall: 0.9378
- F1: 0.9308
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2381 | 1.0 | 878 | 0.0707 | 0.9100 | 0.9240 | 0.9170 | 0.9805 |
| 0.0563 | 2.0 | 1756 | 0.0583 | 0.9246 | 0.9382 | 0.9314 | 0.9835 |
| 0.03 | 3.0 | 2634 | 0.0596 | 0.9240 | 0.9378 | 0.9308 | 0.9838 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jatinshah/bert-finetuned-ner | jatinshah | 2022-02-16T03:50:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9330024813895782
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9410194377242012
- name: Accuracy
type: accuracy
value: 0.9861511744275033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9330
- Recall: 0.9492
- F1: 0.9410
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
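Until the card is completed, the sketch below shows one way to run the checkpoint for NER with the Transformers pipeline; the example sentence and the `simple` aggregation strategy are assumptions rather than part of the original documentation.
```python
from transformers import pipeline
# Load the fine-tuned checkpoint from the Hub and merge sub-word predictions
# back into whole entities with the "simple" aggregation strategy
ner = pipeline(
    "token-classification",
    model="jatinshah/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Sarah and I work at Acme Corp in London."))
```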
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 1756 | 0.0647 | 0.9147 | 0.9345 | 0.9245 | 0.9826 |
| 0.0305 | 2.0 | 3512 | 0.0599 | 0.9333 | 0.9463 | 0.9398 | 0.9858 |
| 0.0212 | 3.0 | 5268 | 0.0599 | 0.9330 | 0.9492 | 0.9410 | 0.9862 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-01_55_54 | ali2066 | 2022-02-16T01:18:01Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-01_55_54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_55_54
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-01_30_30 | ali2066 | 2022-02-16T00:32:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-01_30_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_30_30
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- Precision: 0.3384
- Recall: 0.3492
- F1: 0.3437
- Accuracy: 0.9442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3180 | 0.0985 | 0.1648 | 0.1233 | 0.8643 |
| No log | 2.0 | 76 | 0.2667 | 0.1962 | 0.2698 | 0.2272 | 0.8926 |
| No log | 3.0 | 114 | 0.2374 | 0.2268 | 0.3005 | 0.2585 | 0.9062 |
| No log | 4.0 | 152 | 0.2305 | 0.2248 | 0.3247 | 0.2657 | 0.9099 |
| No log | 5.0 | 190 | 0.2289 | 0.2322 | 0.3166 | 0.2679 | 0.9102 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ncats/EpiExtract4GARD-v2 | ncats | 2022-02-16T00:08:16Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ncats",
"en",
"dataset:ncats/EpiSet4NER",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
language:
- en
widget:
- text: "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
example_title: "Named Entity Recognition Ex. 1"
- text: "A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births)"
example_title: "Named Entity Recognition Ex. 2"
- text: "A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence."
example_title: "Named Entity Recognition Ex. 3"
tags:
- token-classification
- ncats
model-index:
- name: EpiExtract4GARD-v2
results:
- task:
name: NER
type: token-classification
metrics:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
datasets:
- ncats/EpiSet4NER
license: other
---
## DOCUMENTATION UPDATES IN PROGRESS
## Model description
**EpiExtract4GARD-v2** is a fine-tuned [BioBERT-base-cased](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on EpiSet4NER-v2 for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline.
#### How to use
You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
See the code below for use with the Transformers *pipeline* for NER:
~~~
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD")
tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD")
NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer,aggregation_strategy='simple')
sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia."
sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives."
#Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553.
#Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821.
NER_pipeline(sample)
NER_pipeline(sample2)
~~~
Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb):
~~~
import pandas as pd
import extract_abs
import classify_abs
pd.set_option('display.max_colwidth', None)
NER_pipeline = extract_abs.init_NER_pipeline()
GARD_dict, max_length = extract_abs.load_GARD_diseases()
nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model()
def search(term,num_results = 50):
return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length,nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer)
a = search(7058)
a
b = search('Santos Mateus Leal syndrome')
b
c = search('Fellman syndrome')
c
d = search('GARD:0009941')
d
e = search('Homocystinuria')
e
~~~
#### Limitations and bias
## Training data
It was trained on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
---------|--------------
O |Outside of a named entity
B-LOC | Beginning of a location
I-LOC | Inside of a location
B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence")
I-EPI | Inside of an epidemiologic type
B-STAT | Beginning of an epidemiologic rate
I-STAT | Inside of an epidemiologic rate
+More | Description pending
### EpiSet Statistics
Beyond any limitations inherited from the EpiSet4NER dataset, this model is limited in numeracy because of the BERT-based model's use of subword embeddings; numeracy is crucial for identifying epidemiologic rates, so this weakness limits the entity-level results. Recent techniques for improving numeracy could raise the model's performance without changing the underlying dataset.
## Training procedure
This model was trained on a [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/), which utilized a single Tesla V100 GPU, with these hyperparameters:
4 epochs of training (AdamW, weight decay = 0.05) with a batch size of 16 and a maximum sequence length of 192. The model was fed one sentence at a time.
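For orientation, the hyperparameters above correspond roughly to the `TrainingArguments` sketched below. This is a reconstruction that assumes the standard Hugging Face `Trainer` was used; the output directory name is invented, and the 192-token limit is applied at tokenization time rather than here.
~~~
from transformers import TrainingArguments
# Approximate settings described above: 4 epochs, batch size 16,
# AdamW with weight decay 0.05 (the learning rate is not reported)
training_args = TrainingArguments(
    output_dir="epiextract4gard-v2",  # hypothetical name
    num_train_epochs=4,
    per_device_train_batch_size=16,
    weight_decay=0.05,
)
~~~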
<!--- Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml). --->
<!--- THIS IS NOT THE UPDATED RESULTS --->
<!--- ## Hold-out validation results --->
<!--- metric| entity-level result --->
<!--- -|- --->
<!--- f1 | 83.8 --->
<!--- precision | 83.2 --->
<!--- recall | 84.5 --->
<!--- ## Test results --->
<!--- | Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 | --->
<!--- |:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:| --->
<!--- | EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 | --->
<!--- | | | Location | 0.661 | 0.696 | 0.678 | --->
<!--- | | | Epidemiologic Type | 0.854 | 0.911 | 0.882 | --->
<!--- | | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 | --->
<!--- | | Token-Level | Overall | 0.811 | 0.713 | 0.759 | --->
<!--- | | | Location | 0.949 | 0.742 | 0.833 | --->
<!--- | | | Epidemiologic Type | 0.9 | 0.917 | 0.908 | --->
<!--- | | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 | --->
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model. |
vxvxx/t5-small-finetuned-no_paragraph-to-paragraph | vxvxx | 2022-02-15T23:01:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-no_paragraph-to-paragraph
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-no_paragraph-to-paragraph
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
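Because the task and input format are not documented, the snippet below only illustrates how such a T5 checkpoint is typically loaded and queried with the text2text-generation pipeline; the input string is a placeholder.
```python
from transformers import pipeline
# Generic seq2seq inference; the expected prompt format for this
# checkpoint is not documented, so treat this purely as a loading example
generator = pipeline(
    "text2text-generation",
    model="vxvxx/t5-small-finetuned-no_paragraph-to-paragraph",
)
print(generator("Example input text goes here.", max_length=64))
```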
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 0.767 | 1.0 | 576 | 0.0713 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingartists/led-zeppelin | huggingartists | 2022-02-15T22:19:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/led-zeppelin",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/led-zeppelin
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e4763bba12e6411077a3e573cd290da0.433x433x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Led Zeppelin</div>
<a href="https://genius.com/artists/led-zeppelin">
<div style="text-align: center; font-size: 14px;">@led-zeppelin</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Led Zeppelin.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/led-zeppelin).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/led-zeppelin")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/cpexpb1w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Led Zeppelin's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/bna2epba) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/bna2epba/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/led-zeppelin')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/led-zeppelin")
model = AutoModelWithLMHead.from_pretrained("huggingartists/led-zeppelin")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the lyrics present in the training data further affect the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
Sourabh714/distilbert-base-uncased-finetuned-squad | Sourabh714 | 2022-02-15T20:47:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1573
## Model description
More information needed
## Intended uses & limitations
More information needed
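As a placeholder until the card is completed, here is a minimal extractive question-answering sketch with the Transformers pipeline; the question/context pair is an invented example.
```python
from transformers import pipeline
# Extractive QA: the answer is a span copied from the provided context
qa = pipeline(
    "question-answering",
    model="Sourabh714/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```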
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2188 | 1.0 | 5533 | 1.1708 |
| 0.9519 | 2.0 | 11066 | 1.1058 |
| 0.7576 | 3.0 | 16599 | 1.1573 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
espnet/roshansh_how2_asr_raw_ft_sum_valid.acc | espnet | 2022-02-15T19:51:13Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-summarization",
"en",
"dataset:how2",
"arxiv:2110.06263",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-summarization
language: en
datasets:
- how2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/roshansh_how2_asr_raw_ft_sum_valid.acc`
This model was trained by roshansh-cmu using the how2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e6f42a9783a5d9eba0687c19417f933e890722d7
pip install -e .
cd egs2/how2/sum1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 7 15:24:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `04561cdf3b6c3bc1d51edb04c93b953759ef551d`
- Commit date: `Mon Feb 7 09:06:12 2022 -0500`
## asr_raw_ft_sum
|dataset|Snt|Wrd|ROUGE-1|ROUGE-2|ROUGE-L|METEOR|BERTScore|
|---|---|---|---|---|---|---|---|
|decode_sum_asr_model_valid.acc.best/dev5_test_sum|2127|69795|60.72|44.7|56.1|29.36|91.53|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer_vid_lf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_raw_ft_sum
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45875
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 5000
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/asr_raw_utt_conformer/valid.acc.ave_10best.pth:::ctc
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 60000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_vid_sum/train/speech_shape
- exp/asr_stats_raw_vid_sum/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_vid_sum/valid/speech_shape
- exp/asr_stats_raw_vid_sum/valid/text_shape.bpe
batch_type: length
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_2000h_sum_trim/wav.scp
- speech
- sound
- - dump/raw/tr_2000h_sum_trim/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/cv05_sum_trim/wav.scp
- speech
- sound
- - dump/raw/cv05_sum_trim/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
token_list:
- <blank>
- <unk>
- '[hes]'
- S
- ▁THE
- ▁TO
- ''''
- ▁AND
- ▁YOU
- ▁A
- ▁IT
- T
- ▁THAT
- ▁OF
- ▁I
- ▁IS
- RE
- ▁IN
- ING
- ▁WE
- M
- ▁GOING
- ▁SO
- ▁THIS
- ▁YOUR
- ▁ON
- E
- D
- ▁BE
- ▁CAN
- N
- Y
- O
- ER
- ▁HAVE
- ▁JUST
- ▁FOR
- ▁WITH
- ▁DO
- ED
- ▁ARE
- ▁WANT
- ▁UP
- R
- LL
- P
- ▁
- L
- B
- ▁IF
- C
- ▁ONE
- ▁S
- ▁OR
- A
- ▁GO
- ▁LIKE
- ▁NOW
- ▁HERE
- VE
- LE
- U
- ▁GET
- ▁WHAT
- ▁OUT
- IN
- W
- ▁C
- ▁LITTLE
- ▁THERE
- LY
- ▁AS
- ▁MAKE
- I
- ▁THEY
- ▁MY
- K
- ▁THEN
- ▁BUT
- AL
- G
- ▁ALL
- OR
- ▁BACK
- ▁NOT
- ▁ABOUT
- ▁RIGHT
- ▁OUR
- EN
- ▁SOME
- ▁DOWN
- F
- ▁WHEN
- CH
- ▁F
- ▁HOW
- AR
- ▁WILL
- ▁RE
- CK
- ▁G
- ES
- CE
- ▁TAKE
- ▁AT
- ▁FROM
- ▁WAY
- TER
- ▁SEE
- RA
- ▁USE
- ▁REALLY
- RI
- TH
- ▁TWO
- ▁ME
- ▁VERY
- ▁E
- ▁B
- AT
- ▁THEM
- ▁DON
- ▁AN
- ▁BECAUSE
- ▁MORE
- RO
- H
- 'ON'
- LI
- ▁PUT
- ▁ST
- IL
- ▁BIT
- ▁START
- ▁NEED
- ▁INTO
- UR
- ▁TIME
- ▁OVER
- ▁W
- ▁DE
- ▁LOOK
- ▁THESE
- ▁LET
- ▁GOOD
- ▁ALSO
- AN
- ▁OFF
- ▁HE
- ▁KIND
- ▁SIDE
- ▁CO
- ▁SURE
- ▁AGAIN
- ▁MA
- ▁KNOW
- IT
- ▁WOULD
- IC
- ▁OTHER
- LA
- ▁P
- ▁WHICH
- '-'
- IR
- ▁LA
- ▁HAND
- EL
- ▁LOT
- ▁WHERE
- ▁THREE
- ▁PA
- ION
- LO
- ▁KEEP
- ▁SHOW
- ▁THING
- ▁FIRST
- TE
- ENT
- ATE
- ▁COME
- AD
- ▁GOT
- NG
- ▁NICE
- ▁T
- ET
- ▁MO
- ▁ANY
- ▁ACTUALLY
- ▁DIFFERENT
- ▁SE
- GE
- ▁WORK
- ▁THROUGH
- ▁O
- KE
- V
- ▁AROUND
- ▁BA
- PE
- ▁HI
- ▁BY
- SH
- ATION
- ▁SU
- ▁CA
- ▁D
- ▁LO
- ▁HAS
- ▁LI
- ▁PLAY
- Z
- ▁ADD
- ▁RO
- ▁TA
- AS
- ▁FOUR
- ▁CON
- ▁THOSE
- MP
- NE
- ▁SP
- UT
- ▁GIVE
- ▁WELL
- ▁BALL
- TING
- RY
- X
- ▁HO
- INE
- IVE
- ▁NEXT
- ▁PO
- ▁STEP
- ▁EVEN
- TION
- ▁MI
- MENT
- ▁CUT
- ▁BO
- ▁LINE
- ▁MUCH
- ▁THINGS
- ▁TALK
- UN
- ▁PART
- ▁WAS
- ▁FA
- ▁SOMETHING
- PP
- ANCE
- ND
- DI
- ▁RA
- AGE
- ▁SAME
- ▁EXPERT
- ▁DOING
- ▁LEFT
- IST
- ▁DI
- ▁NO
- RU
- ME
- TA
- UL
- TI
- ▁VILLAGE
- DE
- ERS
- ▁PEOPLE
- ▁TURN
- VER
- ▁FL
- ▁LEG
- ▁ONCE
- ▁COLOR
- ▁PULL
- ▁USING
- VI
- ▁WATER
- ▁SHE
- ▁TOP
- ▁OKAY
- ▁ANOTHER
- ▁THEIR
- ▁SAY
- URE
- ▁HA
- ▁IMPORTANT
- ▁PIECE
- ▁FOOT
- ▁TRA
- ▁SC
- ▁BODY
- ▁SET
- ▁POINT
- ▁HELP
- ▁TODAY
- ▁BRING
- ▁V
- ▁END
- MA
- ▁CH
- ▁MOST
- ▁K
- ▁AHEAD
- ▁HER
- OL
- ▁SA
- AM
- IES
- ▁THINK
- ▁NAME
- ▁TRY
- ▁MOVE
- ONE
- ▁LE
- ▁TOO
- TO
- UM
- ▁PLACE
- ▁COULD
- ▁FIND
- ▁FIVE
- ▁ALWAYS
- ID
- TY
- NT
- ▁FEEL
- ▁HEAD
- ▁THAN
- NA
- ▁EX
- ▁EYE
- ITY
- CI
- OP
- ▁SHOULD
- ▁MIGHT
- ▁HOLD
- ▁CAR
- AND
- ▁GREAT
- ▁RI
- ▁BU
- ▁HIGH
- ▁OPEN
- ▁BEFORE
- US
- ▁FRONT
- ▁LONG
- ▁TOGETHER
- NI
- ▁HAIR
- ▁LIGHT
- ▁TEN
- ▁HIT
- EST
- OUS
- ▁PRETTY
- ▁TYPE
- IP
- CO
- ▁FINGER
- ▁JO
- ▁UN
- ▁PRO
- ▁STRAIGHT
- ▁BEHALF
- ▁TI
- ▁SIX
- ▁CLEAN
- ▁DIS
- ▁DA
- ▁POSITION
- IGHT
- ACT
- ▁CHA
- ▁PE
- GG
- AP
- ▁MEAN
- ▁COMP
- FI
- ▁KNEE
- ▁CALLED
- ▁HANDS
- ▁PRE
- ▁FORWARD
- ▁AREA
- ANT
- ▁TE
- ▁WA
- ▁AFTER
- ▁SMALL
- ▁THROW
- ▁EVERY
- ▁SHOULDER
- NC
- PER
- ▁MAYBE
- ▁ABLE
- ▁BASICALLY
- ▁AM
- ▁READY
- ▁BOTTOM
- IE
- ▁HALF
- FF
- ▁BIG
- ▁EACH
- ▁PUSH
- ▁EIGHT
- ▁NEW
- ▁DONE
- ▁MAY
- ▁GETTING
- HO
- ▁HIS
- ▁HARD
- ▁CLOSE
- ALLY
- ▁SECOND
- ▁FEET
- ICAL
- ▁JA
- ▁PAINT
- ▁LEARN
- ▁SOUND
- HE
- ▁ROLL
- ▁ONLY
- ▁DOESN
- WA
- ▁DRAW
- ▁VI
- ▁DID
- ▁SHA
- ▁CENTER
- CU
- ▁CLIP
- ▁PI
- ▁CARD
- ▁INSIDE
- ▁PERSON
- ▁STILL
- ▁MAKING
- 'NO'
- ▁EVERYTHING
- .
- ▁FUN
- ARD
- ▁REMEMBER
- ▁AWAY
- ATED
- COM
- ▁SEVEN
- ▁BEEN
- ▁MANY
- ABLE
- ▁DAY
- ▁SIT
- IZE
- ▁REAL
- ▁HIP
- ▁BASIC
- ▁KICK
- ▁TU
- ATING
- ▁STICK
- ▁FLAT
- ▁WHO
- END
- HA
- ▁EXP
- ▁PICK
- ▁MIX
- ▁TRI
- ▁BI
- ▁WHOLE
- ▁STRETCH
- ▁BOTH
- ▁PROBABLY
- CA
- ▁HIM
- ▁STRING
- ▁EDGE
- ▁BASE
- ▁COMING
- UGH
- ▁LIFT
- ▁STA
- ▁WORKING
- ▁MU
- ▁QUICK
- ▁SOMETIMES
- ▁HAPPEN
- ▁YOURSELF
- ▁TALKING
- ▁DR
- ▁TELL
- ▁ANYTHING
- ▁BRA
- ▁LOOKING
- ▁SLOW
- ▁NE
- ▁STAND
- NER
- ▁COMES
- ▁GOES
- ISE
- BE
- ▁USED
- ▁UNDER
- ▁BETWEEN
- ▁HU
- ▁CREATE
- ▁NA
- ▁USUALLY
- ▁ARM
- ▁DRY
- ▁RUN
- LING
- ▁BRUSH
- ▁COVER
- ▁HEAR
- ▁DOES
- ▁STAY
- ▁EN
- ▁FOLD
- ▁CHANGE
- ▁LAST
- ▁EASY
- ▁US
- ▁PER
- ▁FACE
- ▁EAR
- ▁TIGHT
- ▁FE
- ▁PIN
- ▁MAN
- ▁BETTER
- ▁CALL
- ▁PRI
- ▁BEST
- ▁KI
- ▁COUPLE
- ▁WHILE
- ▁SHAPE
- ▁GAME
- IV
- ▁SHOT
- ▁PAPER
- ▁OWN
- ▁ALRIGHT
- ▁HAD
- TIC
- ▁BREATH
- ▁TOOL
- '2'
- ▁ENOUGH
- ▁COURSE
- ▁SKIN
- ▁SPIN
- ▁VA
- ▁ARMS
- ▁TEA
- ▁BREAK
- ▁DOG
- ▁1
- QUE
- ▁DROP
- ▁NUMBER
- IG
- ▁RED
- ▁NOTE
- ▁WEIGHT
- WARD
- ▁PLAYING
- ▁FINISH
- ▁MINUTE
- ▁R
- ▁PRESS
- ▁EITHER
- ▁CHE
- ▁PU
- BER
- ▁FEW
- ▁SIZE
- ▁MADE
- ▁LEAVE
- ▁GA
- ▁ALREADY
- ▁GUY
- ▁FAR
- ▁HOME
- ▁BAR
- UP
- ▁GRAB
- ▁MARK
- ▁WHITE
- ▁PROPER
- ▁CAUSE
- ▁OK
- ▁ART
- HI
- ▁SORT
- ▁EXERCISE
- ▁LOWER
- PORT
- ▁PLANT
- ▁BOARD
- ▁CASE
- ▁YEAR
- CENT
- ▁DU
- ▁CHECK
- ▁WHATEVER
- ▁OIL
- ▁IDEA
- ▁SIMPLE
- ▁PRACTICE
- ▁FAST
- '0'
- ▁CONTROL
- ▁J
- ▁KEY
- ▁MIDDLE
- ▁FULL
- ▁GLASS
- ▁OUTSIDE
- ▁LOW
- ▁REST
- ▁STUFF
- ▁ACT
- ▁UNTIL
- ▁BLACK
- ▁POP
- ▁CLICK
- ▁HOLE
- ▁Z
- ▁COUNT
- ▁POT
- ▁ALLOW
- ▁HAVING
- ▁TRYING
- ▁MUSCLE
- ▁GU
- ▁BOX
- ▁NOTICE
- ▁EXAMPLE
- UND
- ▁ALONG
- FUL
- ISH
- ▁STORE
- ▁LU
- ▁FLOOR
- ▁MOVING
- ▁LARGE
- ▁STOP
- ▁PH
- ▁WALK
- '5'
- ▁QU
- ▁TECHNIQUE
- ▁SOFT
- ▁GROUND
- ▁JUMP
- ▁JU
- ▁FILL
- ▁WHY
- ▁BUY
- ▁GREEN
- ▁WALL
- ▁HEEL
- NESS
- ▁LEVEL
- ▁UNDERNEATH
- ▁PATTERN
- ▁BEHIND
- ▁OLD
- ▁TIP
- ▁COMPLETE
- ▁WON
- ▁TEACH
- ▁FIT
- ▁NECK
- ▁REMOVE
- ▁TRICK
- ▁MOVEMENT
- ▁TOWARDS
- ▁PARTICULAR
- ▁CHI
- ▁EFFECT
- J
- ▁FREE
- ▁ACROSS
- ▁BEND
- ▁SAFE
- ▁SLIDE
- ▁PROBLEM
- ▁BLOCK
- ▁PAN
- ▁NATURAL
- ▁TOUCH
- ▁CHILD
- LINE
- ▁CROSS
- ▁REASON
- '4'
- ▁POWER
- ▁APPLY
- ▁FOLLOW
- ▁DESIGN
- ▁SPACE
- ▁ORDER
- ▁WOOD
- ▁RID
- '3'
- ▁COOK
- ▁BEGIN
- ▁WATCH
- ▁STYLE
- QUA
- ▁PRODUCT
- ▁TAKING
- ▁PUTTING
- ▁EXHALE
- ▁THOUGH
- ▁DEEP
- IAN
- ▁REACH
- ▁FOOD
- ▁ALMOST
- ▁COOL
- ▁SECTION
- ▁SAID
- ▁ANGLE
- ▁MUSIC
- ▁RELAX
- ▁CORNER
- ▁DARK
- ▁CHORD
- ▁ESPECIALLY
- ▁SCALE
- ▁WARM
- ▁WITHOUT
- ▁WHEEL
- ▁SEGMENT
- ▁TABLE
- ▁BOOK
- ▁PASS
- ▁ELBOW
- ▁ROUND
- ▁INHALE
- ▁SMOOTH
- ▁ROOM
- /
- ▁NINE
- ▁SHORT
- ▁MEASURE
- ▁LESS
- ▁TWIST
- ▁BALANCE
- ▁PROCESS
- ▁SWITCH
- ▁GENERAL
- ▁CLAY
- ▁CERTAIN
- ▁NEVER
- ▁BLUE
- ▁CUP
- ▁HOUSE
- ▁EXTRA
- ▁MOTION
- ▁PRESSURE
- ▁FIRE
- ▁SIMPLY
- ▁DOUBLE
- ▁TWENTY
- ▁CATCH
- ▁BECOME
- ▁BUILD
- ▁SPEED
- ▁TRANS
- ▁DRUM
- ▁CHEST
- ▁PICTURE
- ▁LENGTH
- ▁CONTINUE
- ▁COMFORTABLE
- ▁FISH
- ▁PHOTO
- ▁LOOSE
- ▁SKI
- ▁LIFE
- ▁DEGREE
- ▁OPTION
- ▁WORD
- ▁SHARP
- ▁SHOOT
- ▁FOUND
- ▁STRONG
- ▁QUITE
- ▁THIRD
- ▁GLUE
- ▁MIND
- ▁DEFINITELY
- ▁EASIER
- GRAPH
- ▁HOOK
- ▁CLEAR
- ▁POSE
- ▁BUTTON
- ▁CHOOSE
- ▁THICK
- ▁SYSTEM
- ▁PERFECT
- ▁BEAUTIFUL
- ▁SPOT
- ▁GROW
- ▁SIGN
- ▁ELSE
- ▁CONNECT
- ▁SELECT
- ▁PUNCH
- ▁DIRECTION
- ▁WRAP
- ▁RELEASE
- QUI
- SIDE
- ▁CAREFUL
- ▁VIDEO
- ▁INSTEAD
- ▁CIRCLE
- ▁WIRE
- ▁NOSE
- ▁AMOUNT
- ▁FOCUS
- ▁NORMAL
- ▁MAJOR
- ▁WHETHER
- ▁SURFACE
- ▁THUMB
- ▁DRIVE
- ▁SCREW
- ▁POSSIBLE
- ▁OBVIOUSLY
- ▁COMMON
- ▁REGULAR
- ▁ADJUST
- ▁WIDE
- ▁BLADE
- ▁FRET
- ▁RECOMMEND
- ▁BOWL
- BOARD
- ▁IMAGE
- ▁DEPENDING
- ▁PROTECT
- ▁CLOTH
- ▁HEALTH
- ▁WRIST
- ▁CLUB
- ▁DRINK
- ▁SINCE
- ▁FRIEND
- '00'
- ▁RUNNING
- ▁ITSELF
- ▁RECORD
- ▁SWING
- ▁DIRECT
- ▁MATERIAL
- ▁YO
- ▁LEAST
- ▁EXACTLY
- ▁BEGINNING
- ▁SLIGHTLY
- ▁TREAT
- ▁CAMERA
- ▁QUARTER
- ▁WINDOW
- '8'
- ▁SOMEBODY
- ▁BURN
- ▁DEMONSTRATE
- ▁DIFFERENCE
- ▁COMPUTER
- IBLE
- ▁SHOE
- ▁PERFORM
- ▁SQUARE
- ▁CONSIDER
- ▁DRILL
- ▁TEXT
- ▁FILE
- ▁RUB
- ▁FABRIC
- ▁HUNDRED
- ▁GRIP
- ▁CHARACTER
- ▁SPECIFIC
- ▁KNOT
- ▁CURL
- ▁STITCH
- ▁BLEND
- ▁FRAME
- ▁THIRTY
- '1'
- ▁HORSE
- ▁ATTACH
- ▁GROUP
- ▁STROKE
- ▁GUITAR
- ▁APART
- ▁MACHINE
- ▁CLASS
- ▁COMB
- ▁ROOT
- ▁HELLO
- ▁ENERGY
- ▁ATTACK
- ▁CORRECT
- ▁EXTEND
- ▁MINOR
- ▁PROFESSIONAL
- ▁MONEY
- ▁STRIP
- ▁FLAVOR
- ▁EVERYBODY
- ▁RULE
- ▁DIFFICULT
- ▁PROJECT
- ▁DISCUSS
- ▁FIGURE
- ▁HOWEVER
- ▁FINAL
- ▁STRENGTH
- ▁ENTIRE
- ▁FIELD
- ▁CONTACT
- ▁SUPPORT
- ▁PALM
- ▁SERIES
- ▁ENJOY
- '6'
- ▁WORLD
- ▁DECIDE
- ▁SPEAK
- ▁SEVERAL
- ▁WRITE
- ▁PROGRAM
- ABILITY
- ▁KNIFE
- ▁PLASTIC
- ▁ORGAN
- '7'
- ▁UNDERSTAND
- ▁FIFTEEN
- ▁FLEX
- ▁INFORMATION
- ▁TWELVE
- ▁DETAIL
- ▁STRIKE
- ▁ACTUAL
- ▁SPRAY
- ▁LOCAL
- ▁MOUTH
- ▁NIGHT
- ▁VEHICLE
- ▁OPPOSITE
- ▁SCHOOL
- '9'
- ▁QUESTION
- ▁SPECIAL
- ▁BIGGER
- ▁DEVELOP
- ▁PEPPER
- ▁PREFER
- Q
- '%'
- ']'
- '['
- '&'
- ','
- _
- '#'
- '='
- '@'
- +
- '*'
- $
- '~'
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.0
lsm_weight: 0.15
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: data/nlsyms
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_vid_sum/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: abs_pos
selfattention_layer_type: lf_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
attention_windows:
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
attention_dilation:
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
attention_mode: tvm
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 512
num_blocks: 6
dropout_rate: 0.15
positional_dropout_rate: 0.15
self_attention_dropout_rate: 0.15
src_attention_dropout_rate: 0.15
required:
- output_dir
- token_list
version: 0.10.0
distributed: true
```
</details>
Please cite the following paper if you use this recipe:
```BibTex
@misc{sharma2022speech,
title={Speech Summarization using Restricted Self-Attention},
author={Roshan Sharma and Shruti Palaskar and Alan W Black and Florian Metze},
year={2022},
eprint={2110.06263},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
premrawat/en_model_ner_skills | premrawat | 2022-02-15T19:50:15Z | 6 | 4 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_model_ner_skills
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3125
- name: NER Recall
type: recall
value: 0.243902439
- name: NER F Score
type: f_score
value: 0.2739726027
---
| Feature | Description |
| --- | --- |
| **Name** | `en_model_ner_skills` |
| **Version** | `0.0.2` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 27.40 |
| `ENTS_P` | 31.25 |
| `ENTS_R` | 24.39 |
| `TOK2VEC_LOSS` | 129837.25 |
| `NER_LOSS` | 1056832.41 | |
AI-Nordics/bert-large-swedish-cased | AI-Nordics | 2022-02-15T16:52:53Z | 162 | 11 | transformers | [
"transformers",
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
language: sv
---
# A Swedish Bert model
## Model description
This model follows the BERT Large architecture as implemented in the [Megatron-LM framework](https://github.com/NVIDIA/Megatron-LM). It was trained with a batch size of 512 for 600k steps. The model has the following parameters:
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 340M |
| \\(n_{layers}\\) | 24 |
| \\(n_{heads}\\) | 16 |
| \\(n_{ctx}\\) | 1024 |
| \\(n_{vocab}\\) | 30592 |
## Training data
The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below.
<figure>
| Dataset | Genre | Size(GB)|
|----------------------|------|------|
| Anföranden | Politics |0.9|
|DCEP|Politics|0.6|
|DGT|Politics|0.7|
|Fass|Medical|0.6|
|Författningar|Legal|0.1|
|Web data|Misc|45.0|
|JRC|Legal|0.4|
|Litteraturbanken|Books|0.3|
|SCAR|Misc|28.0|
|SOU|Politics|5.3|
|Subtitles|Drama|1.3|
|Wikipedia|Facts|1.8|
## Intended uses & limitations
The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
<br>
<br>
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("AI-Nordics/bert-large-swedish-cased")
model = AutoModelForMaskedLM.from_pretrained("AI-Nordics/bert-large-swedish-cased")
```
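For a quick end-to-end check, the checkpoint can also be used through the fill-mask pipeline; the Swedish example sentence below is ours and assumes the standard `[MASK]` token.
```python
from transformers import pipeline
# Print the top predictions for the masked token in a Swedish sentence
unmasker = pipeline("fill-mask", model="AI-Nordics/bert-large-swedish-cased")
for prediction in unmasker("Huvudstaden i Sverige är [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```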
|
AKulk/wav2vec2-base-timit-epochs15 | AKulk | 2022-02-15T14:26:13Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-epochs15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs10](https://huggingface.co/AKulk/wav2vec2-base-timit-epochs10) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
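Until more documentation is added, the sketch below shows a typical way to transcribe audio with this checkpoint; it assumes the processor/tokenizer files are bundled with the model, and the audio path is a placeholder for a 16 kHz mono WAV file.
```python
from transformers import pipeline
# Wav2Vec2 CTC model: feed 16 kHz mono audio, get a text transcript back
asr = pipeline(
    "automatic-speech-recognition",
    model="AKulk/wav2vec2-base-timit-epochs15",
)
print(asr("path/to/16khz_mono_clip.wav")["text"])  # placeholder path
```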
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
xujiacheng127/anchi-bert | xujiacheng127 | 2022-02-15T12:01:06Z | 0 | 2 | null | [
"pytorch",
"region:us"
]
| null | 2022-03-02T23:29:05Z | import json
import requests
API_TOKEN = "<your_huggingface_api_token>"  # placeholder: supply your own access token
headers = {"Authorization": f"Bearer {API_TOKEN}"}
# Generic example endpoint; swap in another model id to query a different hosted model
API_URL = "https://api-inference.huggingface.co/models/bert-base-uncased"
def query(payload):
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))
data = query({"inputs": "The answer to the universe is [MASK]."}) |
CLAck/en-km | CLAck | 2022-02-15T11:26:53Z | 39 | 3 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
---
This model translates from English to Khmer.
It is a purely fine-tuned version of the MarianMT en-zh model.
This is the result after 30 epochs of pure fine-tuning on the Khmer language.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Khmer available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/en-km")
tokenizer = AutoTokenizer.from_pretrained("CLAck/en-km")
# Download a tokenizer that can tokenize English since the model Tokenizer doesn't know anymore how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2khm>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2khm> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
``` |
CLAck/indo-mixed | CLAck | 2022-02-15T11:25:18Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
language:
- en
- id
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This model is pretrained on the Chinese and Indonesian languages and fine-tuned on Indonesian.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-mixed")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-mixed")
# Download a tokenizer that can tokenize English since the model Tokenizer doesn't know anymore how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 24.2579 |
| 2.0 | 30.6287 |
| 3.0 | 34.4417 |
| 4.0 | 36.2577 |
| 5.0 | 37.3488 |
FINETUNING
| Epoch | Bleu |
|:-----:|:-------:|
| 6.0 | 34.1676 |
| 7.0 | 35.2320 |
| 8.0 | 36.7110 |
| 9.0 | 37.3195 |
| 10.0 | 37.9461 | |
CLAck/indo-pure | CLAck | 2022-02-15T11:24:33Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
language:
- en
- id
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
A purely fine-tuned version of MarianMT en-zh for the Indonesian language.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-pure")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-pure")
# Download a tokenizer that can tokenize English since the model Tokenizer doesn't know anymore how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 15.9336 |
| 2.0 | 28.0175 |
| 3.0 | 31.6603 |
| 4.0 | 33.9151 |
| 5.0 | 35.0472 |
| 6.0 | 35.8469 |
| 7.0 | 36.1180 |
| 8.0 | 36.6018 |
| 9.0 | 37.1973 |
| 10.0 | 37.2738 | |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_100_Epochs | jfarray | 2022-02-14T22:15:16Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_50_Epochs | jfarray | 2022-02-14T21:41:05Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs | jfarray | 2022-02-14T21:06:23Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_5_Epochs | jfarray | 2022-02-14T20:57:30Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
smmzhu/DialoGPT-small-SZ | smmzhu | 2022-02-14T20:25:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
--- |
jfarray/Model_bert-base-multilingual-uncased_50_Epochs | jfarray | 2022-02-14T19:44:38Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/magicrealismbot | huggingtweets | 2022-02-14T18:15:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/668872745329885184/67TNOs2A_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Realism Bot</div>
<div style="text-align: center; font-size: 14px;">@magicrealismbot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Realism Bot.
| Data | Magic Realism Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nx0qvg7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magicrealismbot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/magicrealismbot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud2-ner | akshaychaudhary | 2022-02-14T17:33:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud2-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8866
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 162 | 0.7804 | 0.0 | 0.0 | 0.0 | 0.8447 |
| No log | 2.0 | 324 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.8465 |
| No log | 3.0 | 486 | 0.8866 | 0.0 | 0.0 | 0.0 | 0.8453 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-large-sh | NewT5SharedHeadsSharedKeyValues | 2022-02-14T16:22:44Z | 6 | 0 | transformers | [
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- t5-new-failed
---
# Test
Hf T5: -110.35000801086426
MTF T5: -57.58127975463867
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-base-sh | NewT5SharedHeadsSharedKeyValues | 2022-02-14T16:22:41Z | 4 | 0 | transformers | [
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- t5-new-failed
---
# Test
Hf T5: -95.86687088012695
MTF T5: -67.8558578491211
|
NYTK/translation-marianmt-en-hu | NYTK | 2022-02-14T13:31:08Z | 44 | 1 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"hu",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
language:
- en
- hu
tags:
- translation
license: gpl-3.0
metrics:
- sacrebleu
- chrf
widget:
- text: "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."
example_title: "Translation: English-Hungarian"
---
# Marian Translation model
For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp), which also documents the REST API of our service.
This model was trained with [MarianNMT](https://github.com/marian-nmt/marian-dev) v1.10.23 (commit 42f0b8b7) in a transformer-big configuration.
This repository contains our English–Hungarian (en-hu) translation model, which was published at the MSZNY 2022 conference.
- Source language: English
- Target language: Hungarian
- Pretrained on subcorpora from OPUS
- Segments: 56,837,602
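Since the checkpoint is published as a standard Marian model (repo id `NYTK/translation-marianmt-en-hu`), it should be usable directly from the transformers library. The sketch below is an assumption-based example, not an official snippet from the authors.
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "NYTK/translation-marianmt-en-hu"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Translate an English sentence to Hungarian
batch = tokenizer(["This may not make much sense to you, sir."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```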
## Limitations
## Results
| Model | BLEU | chrF-3 |
| ------------- | ------------- | ------------- |
| Google en-hu | 25.30 | 54.08 |
| **Marian-big-enhu** | **37.30** | **61.61** |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {laki-yang-mt,
title = {{Jobban fordítunk magyarra, mint a Google!}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Laki, László and Yang, Zijian Győző},
pages = {357--372}
}
```
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud1-ner | akshaychaudhary | 2022-02-14T13:30:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud1-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud1-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Precision: 0.9714
- Recall: 0.9855
- F1: 0.9784
- Accuracy: 0.9972
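A minimal sketch of running inference through the token-classification pipeline; the entity labels come from this model's own (undocumented) label mapping, so treat the example input and output as illustrative only.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="akshaychaudhary/distilbert-base-uncased-finetuned-cloud1-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Deploy the service to a new Kubernetes cluster in us-east-1."))
```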
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.0160 | 0.9653 | 0.9420 | 0.9535 | 0.9945 |
| No log | 2.0 | 332 | 0.0089 | 0.9623 | 0.9855 | 0.9737 | 0.9965 |
| No log | 3.0 | 498 | 0.0074 | 0.9714 | 0.9855 | 0.9784 | 0.9972 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sshasnain/wav2vec2-xls-r-300m-bangla-command-synthetic | sshasnain | 2022-02-14T08:39:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-synthetic
This model is a fine-tuned version of [sshasnain/wav2vec2-xls-r-300m-bangla-command](https://huggingface.co/sshasnain/wav2vec2-xls-r-300m-bangla-command) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0254
- eval_wer: 0.4311
- eval_runtime: 2.5036
- eval_samples_per_second: 76.689
- eval_steps_per_second: 9.586
- epoch: 35.71
- step: 1000
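A hedged usage sketch with the transformers ASR pipeline; the audio path is a placeholder and the input is assumed to be Bangla command speech sampled at 16 kHz, as in training.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sshasnain/wav2vec2-xls-r-300m-bangla-command-synthetic",
)
# Placeholder path: any 16 kHz mono WAV file with a spoken Bangla command
print(asr("command_sample.wav"))
```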
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DeltaHub/lora_t5-base_mrpc | DeltaHub | 2022-02-14T06:32:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04Z | This delta checkpoint needs to be loaded with OpenDelta on top of a `t5-base` backbone:
```python
from transformers import AutoModelForSeq2SeqLM
from opendelta import AutoDeltaModel

# Load the frozen T5 backbone
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Attach the fine-tuned LoRA delta to the backbone
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)
delta.log()  # print a summary of the injected delta modules
```
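Once the delta is attached, the backbone can be queried in the usual T5 text-to-text style. The prompt format below assumes the standard T5 GLUE/MRPC convention (`mrpc sentence1: ... sentence2: ...`); this is an assumption, not something stated by the authors.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

# Assumed T5-style MRPC prompt; the model is expected to generate "equivalent" / "not_equivalent"
prompt = ("mrpc sentence1: The company posted higher profits this quarter. "
          "sentence2: Quarterly profits at the company rose.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = t5.generate(**inputs)  # `t5` is the delta-patched backbone from the block above
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```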
|
jatinshah/distilbert-base-uncased-finetuned-imdb | jatinshah | 2022-02-14T04:17:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4726
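A minimal usage sketch with the fill-mask pipeline; the sentence is a placeholder chosen to match the movie-review domain of IMDb.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="jatinshah/distilbert-base-uncased-finetuned-imdb")

# distilbert-base-uncased uses [MASK] as its mask token
for prediction in unmasker("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```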
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7091 | 1.0 | 157 | 2.4999 |
| 2.5768 | 2.0 | 314 | 2.4239 |
| 2.5371 | 3.0 | 471 | 2.4366 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
stellaathena/test-med | stellaathena | 2022-02-14T02:28:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
license: apache-2.0
---
|
jfarray/Model_bert-base-multilingual-uncased_10_Epochs | jfarray | 2022-02-13T23:21:43Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
groar/gpt-neo-1.3B-finetuned-escape2 | groar | 2022-02-13T20:59:30Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-finetuned-escape2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-finetuned-escape2
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_all-distilroberta-v1_50_Epochs | jfarray | 2022-02-13T20:18:37Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingartists/egor-letov | huggingartists | 2022-02-13T20:16:48Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/egor-letov",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/egor-letov
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/faa3dae99bf1fe365927608fd55c745a.330x330x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Егор Летов (Egor Letov)</div>
<a href="https://genius.com/artists/egor-letov">
<div style="text-align: center; font-size: 14px;">@egor-letov</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Егор Летов (Egor Letov).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/egor-letov).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/egor-letov")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1omrcegx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Егор Летов (Egor Letov)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3lk60u9h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3lk60u9h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/egor-letov')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/egor-letov")
model = AutoModelWithLMHead.from_pretrained("huggingartists/egor-letov")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
jfarray/Model_all-distilroberta-v1_30_Epochs | jfarray | 2022-02-13T20:00:26Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_all-distilroberta-v1_5_Epochs | jfarray | 2022-02-13T19:40:19Z | 10 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
castorini/dkrr-dpr-tqa-retriever | castorini | 2022-02-13T17:57:26Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"arxiv:2012.04584",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini:
```
@misc{izacard2020distilling,
title={Distilling Knowledge from Reader to Retriever for Question Answering},
author={Gautier Izacard and Edouard Grave},
year={2020},
eprint={2012.04584},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
cscottp27/distilbert-base-uncased-finetuned-emotion | cscottp27 | 2022-02-13T13:19:16Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232542847906783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.923
- F1: 0.9233
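A hedged inference sketch with the text-classification pipeline; the label names returned depend on the `id2label` mapping saved with this checkpoint (the emotion dataset uses sadness, joy, love, anger, fear and surprise).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cscottp27/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```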
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8352 | 1.0 | 250 | 0.3079 | 0.91 | 0.9086 |
| 0.247 | 2.0 | 500 | 0.2175 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
srosy/distilbert-base-uncased-finetuned-emotion | srosy | 2022-02-13T09:39:07Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.939
- name: F1
type: f1
value: 0.9391566069722169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.939
- F1: 0.9392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4977 | 1.0 | 1000 | 0.1919 | 0.9255 | 0.9253 |
| 0.1545 | 2.0 | 2000 | 0.1582 | 0.939 | 0.9392 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
timtarusov/distilbert-base-uncased-finetuned-emotion | timtarusov | 2022-02-13T08:48:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9211076096482195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2274
- Accuracy: 0.921
- F1: 0.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8308 | 1.0 | 250 | 0.3319 | 0.8955 | 0.8897 |
| 0.2516 | 2.0 | 500 | 0.2274 | 0.921 | 0.9211 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mujeensung/roberta-base_mnli_bc | mujeensung | 2022-02-13T05:13:00Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base_mnli_bc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9583768461882739
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_mnli_bc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2125
- Accuracy: 0.9584
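A sketch of pairwise premise/hypothesis inference with this checkpoint; since the exact label mapping of this binary-class MNLI variant is not documented here, the class probabilities are printed rather than named.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mujeensung/roberta-base_mnli_bc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class meanings follow the model's id2label mapping
```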
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2015 | 1.0 | 16363 | 0.1820 | 0.9470 |
| 0.1463 | 2.0 | 32726 | 0.1909 | 0.9559 |
| 0.0768 | 3.0 | 49089 | 0.2117 | 0.9585 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_50_Epochs | jfarray | 2022-02-12T21:16:09Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
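As a follow-on to the block above, the pooled embeddings can be L2-normalized and compared with cosine similarity for semantic search; this short sketch reuses the `sentence_embeddings` tensor computed there.
```python
import torch.nn.functional as F

# Cosine similarity matrix between all encoded sentences
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
print(normalized @ normalized.T)
```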
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_30_Epochs | jfarray | 2022-02-12T21:00:41Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_10_Epochs | jfarray | 2022-02-12T20:47:55Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
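Continuing from the snippet above (reusing `sentence_embeddings`), the two example sentences can be scored against each other with plain PyTorch; this is just a minimal follow-up sketch:
```python
import torch.nn.functional as F

# Cosine similarity between the two example sentences (values in [-1, 1]).
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.4f}")
```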
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_5_Epochs | jfarray | 2022-02-12T20:37:59Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_distiluse-base-multilingual-cased-v1_100_Epochs | jfarray | 2022-02-12T19:45:48Z | 137 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jgammack/multi-qa-MTL-distilbert-base-uncased-40k | jgammack | 2022-02-12T14:14:47Z | 144 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-MTL-distilbert-base-uncased-40k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
model = AutoModel.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
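Because this checkpoint is a multi-QA model, a typical use is semantic search over a small corpus. Below is a minimal sketch with the sentence-transformers `util` helpers; the corpus and query are made-up examples:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')

corpus = [
    "The warranty covers manufacturing defects.",
    "Returns are accepted within 30 days.",
    "Shipping takes 3-5 business days.",
]
query = "How long do deliveries take?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the best match by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)[0]
print(corpus[hits[0]['corpus_id']], hits[0]['score'])
```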
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-MTL-distilbert-base-uncased-40k)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_distiluse-base-multilingual-cased-v1_5_Epochs | jfarray | 2022-02-12T13:43:01Z | 131 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingartists/death-grips | huggingartists | 2022-02-12T08:56:17Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/death-grips",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/death-grips
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/de4ca387303c4b46007ca1072c2e57d0.600x600x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Death Grips</div>
<a href="https://genius.com/artists/death-grips">
<div style="text-align: center; font-size: 14px;">@death-grips</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Death Grips.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/death-grips).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/death-grips")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2hmeenl7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Death Grips's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/226ak5bw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/226ak5bw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/death-grips')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/death-grips")
model = AutoModelWithLMHead.from_pretrained("huggingartists/death-grips")
```
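With the tokenizer and model loaded as above, lyrics can be generated directly via `generate`; the prompt and sampling settings below are just illustrative choices, not values prescribed by the project:
```python
inputs = tokenizer("I am", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```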
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
HHousen/household-rooms | HHousen | 2022-02-12T06:21:05Z | 77 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-03-02T23:29:04Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: household-rooms
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8482142686843872
---
# household-rooms
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
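For quick experimentation, the `image-classification` pipeline is enough to classify a photo of a room; the file path below is a placeholder for your own image:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="HHousen/household-rooms")
predictions = classifier("my_room_photo.jpg")  # placeholder path to a local image
print(predictions)  # e.g. [{'label': 'kitchen', 'score': ...}, ...]
```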
## Example Images
#### bathroom

#### bedroom

#### dining room

#### kitchen

#### living room
 |
thyagosme/bert-base-uncased-finetuned-swag | thyagosme | 2022-02-12T02:13:46Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0438
- Accuracy: 0.7915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7708 | 1.0 | 4597 | 0.6025 | 0.7659 |
| 0.4015 | 2.0 | 9194 | 0.6287 | 0.7841 |
| 0.1501 | 3.0 | 13791 | 1.0438 | 0.7915 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
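Since the card does not show inference, here is a hedged sketch of how the fine-tuned checkpoint could be used for multiple choice; the context and candidate endings are invented examples:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("thyagosme/bert-base-uncased-finetuned-swag")
model = AutoModelForMultipleChoice.from_pretrained("thyagosme/bert-base-uncased-finetuned-swag")

context = "She opens the fridge and takes out the eggs."
endings = ["She cracks them into a bowl.", "She plants them in the garden."]

# Each (context, ending) pair is one row; the model expects [batch, num_choices, seq_len].
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape [1, num_choices]
print("Most plausible ending:", endings[logits.argmax().item()])
```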
|
jgammack/multi-qa-distilbert-base-uncased | jgammack | 2022-02-11T23:40:41Z | 141 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/sauce__world | huggingtweets | 2022-02-11T22:14:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/sauce__world/1644617665459/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488960307305218049/nAFuBERK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">poolboy sauce world</div>
<div style="text-align: center; font-size: 14px;">@sauce__world</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from poolboy sauce world.
| Data | poolboy sauce world |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 323 |
| Short tweets | 513 |
| Tweets kept | 2356 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20dtxww4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sauce__world's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sauce__world')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/InformalToFormalLincoln21 | BigSalmon | 2022-02-11T21:24:42Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln21")
```
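Once the tokenizer and model are loaded as above, any of the prompt formats shown below can be passed to `generate`; this is only a sketch, with a prompt taken from the examples and arbitrary sampling settings:
```python
prompt = ("informal english: i am very ready to do that just that.\n"
          "Translated into the Style of Abraham Lincoln:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```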
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
```
***
wordy: chancing upon a linux user is a rare occurrence in the present day.
Translate into Concise Text: present-day linux users are rare.
***
wordy: an interest in classical music is becoming more and more less popular.
Translate into Concise Text: classical music appreciation is dwindling.
Translate into Concise Text: waning interest in classic music persists.
Translate into Concise Text: interest in classic music is fading.
***
wordy: the ice cream was only one dollar, but it was not a good value for the size.
Translate into Concise Text: the one dollar ice cream was overpriced for its size.
Translate into Concise Text: overpriced, the one dollar ice cream was small.
***
wordy:
``` |
microsoft/codebert-base | microsoft | 2022-02-11T19:59:44Z | 574,944 | 236 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"roberta",
"feature-extraction",
"arxiv:2002.08155",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ## CodeBERT-base
Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).
### Training Data
The model is trained on bi-modal data (documents & code) of [CodeSearchNet](https://github.com/github/CodeSearchNet)
### Training Objective
This model is initialized with Roberta-base and trained with MLM+RTD objective (cf. the paper).
### Usage
Please see [the official repository](https://github.com/microsoft/CodeBERT) for scripts that support "code search" and "code-to-document generation".
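For quick experimentation outside those scripts, the checkpoint can also be loaded as a plain encoder to embed natural-language/code pairs. This is a minimal sketch, not the official code-search pipeline; the docstring and snippet are made-up examples, and the first-token embedding is used as a simple pooled representation:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "return the maximum value in a list"       # natural-language query
code = "def max_value(xs): return max(xs)"      # code snippet
inputs = tokenizer(nl, code, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)
pair_embedding = outputs.last_hidden_state[:, 0, :]  # first-token embedding of the pair
print(pair_embedding.shape)  # torch.Size([1, 768])
```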
### Reference
1. [CodeBERT trained with Masked LM objective](https://huggingface.co/microsoft/codebert-base-mlm) (suitable for code completion)
2. 🤗 [Hugging Face's CodeBERTa](https://huggingface.co/huggingface/CodeBERTa-small-v1) (small size, 6 layers)
### Citation
```bibtex
@misc{feng2020codebert,
title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou},
year={2020},
eprint={2002.08155},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
jgammack/multi-qa-SAE-distilbert-base-uncased | jgammack | 2022-02-11T19:45:37Z | 138 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-SAE-distilbert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-SAE-distilbert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-SAE-distilbert-base')
model = AutoModel.from_pretrained('jgammack/multi-qa-SAE-distilbert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-SAE-distilbert-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mbateman/distilbert-base-uncased-finetuned-squad-d5716d28 | mbateman | 2022-02-11T09:26:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
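Although the card focuses on distillation and evaluation, the resulting checkpoint is meant to be a standard extractive QA model. Assuming the uploaded weights include the SQuAD-tuned QA head described above, usage could look like this sketch (question and context are invented):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mbateman/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="Who are the authors of the DistilBERT paper?",
    context="DistilBERT was introduced by Sanh, Debut, Chaumond and Wolf in 2019.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```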
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
shahukareem/wav2vec2-xls-r-1b-dv | shahukareem | 2022-02-11T08:15:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dv",
"robust-speech-event",
"model_for_talk",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- dv
- robust-speech-event
- model_for_talk
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 21.32
- name: Test CER
type: cer
value: 3.43
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Wer: 0.2123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.8412 | 0.66 | 400 | 0.7160 | 0.7913 |
| 0.6832 | 1.33 | 800 | 0.3401 | 0.5268 |
| 0.4624 | 1.99 | 1200 | 0.2671 | 0.4683 |
| 0.3832 | 2.65 | 1600 | 0.2395 | 0.4410 |
| 0.3443 | 3.32 | 2000 | 0.2410 | 0.4296 |
| 0.324 | 3.98 | 2400 | 0.2302 | 0.4143 |
| 0.2934 | 4.64 | 2800 | 0.2402 | 0.4136 |
| 0.2773 | 5.31 | 3200 | 0.2134 | 0.4088 |
| 0.2638 | 5.97 | 3600 | 0.2072 | 0.4037 |
| 0.2479 | 6.63 | 4000 | 0.2036 | 0.3876 |
| 0.2424 | 7.3 | 4400 | 0.2037 | 0.3767 |
| 0.2249 | 7.96 | 4800 | 0.1959 | 0.3802 |
| 0.2169 | 8.62 | 5200 | 0.1943 | 0.3813 |
| 0.2109 | 9.29 | 5600 | 0.1944 | 0.3691 |
| 0.1991 | 9.95 | 6000 | 0.1870 | 0.3589 |
| 0.1917 | 10.61 | 6400 | 0.1834 | 0.3485 |
| 0.1862 | 11.28 | 6800 | 0.1857 | 0.3486 |
| 0.1744 | 11.94 | 7200 | 0.1812 | 0.3330 |
| 0.171 | 12.6 | 7600 | 0.1797 | 0.3436 |
| 0.1599 | 13.27 | 8000 | 0.1839 | 0.3319 |
| 0.1597 | 13.93 | 8400 | 0.1737 | 0.3385 |
| 0.1494 | 14.59 | 8800 | 0.1807 | 0.3239 |
| 0.1444 | 15.26 | 9200 | 0.1750 | 0.3155 |
| 0.1382 | 15.92 | 9600 | 0.1705 | 0.3084 |
| 0.1299 | 16.58 | 10000 | 0.1777 | 0.2999 |
| 0.1306 | 17.25 | 10400 | 0.1765 | 0.3056 |
| 0.1239 | 17.91 | 10800 | 0.1676 | 0.2864 |
| 0.1149 | 18.57 | 11200 | 0.1774 | 0.2861 |
| 0.1134 | 19.24 | 11600 | 0.1654 | 0.2699 |
| 0.1101 | 19.9 | 12000 | 0.1621 | 0.2651 |
| 0.1038 | 20.56 | 12400 | 0.1686 | 0.2610 |
| 0.1038 | 21.23 | 12800 | 0.1722 | 0.2559 |
| 0.0988 | 21.89 | 13200 | 0.1708 | 0.2486 |
| 0.0949 | 22.55 | 13600 | 0.1696 | 0.2453 |
| 0.0913 | 23.22 | 14000 | 0.1677 | 0.2424 |
| 0.0879 | 23.88 | 14400 | 0.1640 | 0.2359 |
| 0.0888 | 24.54 | 14800 | 0.1697 | 0.2347 |
| 0.0826 | 25.21 | 15200 | 0.1709 | 0.2314 |
| 0.0819 | 25.87 | 15600 | 0.1679 | 0.2256 |
| 0.0793 | 26.53 | 16000 | 0.1701 | 0.2214 |
| 0.0773 | 27.2 | 16400 | 0.1682 | 0.2176 |
| 0.0783 | 27.86 | 16800 | 0.1685 | 0.2165 |
| 0.074 | 28.52 | 17200 | 0.1688 | 0.2155 |
| 0.0753 | 29.19 | 17600 | 0.1695 | 0.2110 |
| 0.0699 | 29.85 | 18000 | 0.1702 | 0.2123 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
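The card does not include an inference snippet; a minimal sketch with the speech-recognition pipeline is shown below. The audio path is a placeholder and the recording is assumed to be 16 kHz mono:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shahukareem/wav2vec2-xls-r-1b-dv")
transcription = asr("sample_dhivehi.wav")  # placeholder path to a 16 kHz mono recording
print(transcription["text"])
```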
|
espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw | espnet | 2022-02-11T06:24:00Z | 67 | 1 | espnet | [
"espnet",
"audio",
"audio-to-audio",
"dataset:chime4",
"arxiv:1804.00015",
"arxiv:2011.03706",
"license:cc-by-4.0",
"region:us"
]
| audio-to-audio | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- audio-to-audio
language:
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw`
This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
```
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_beamformer_mvdr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_beamformer_mvdr_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 35841
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/noise_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
init: xavier_uniform
model_conf:
loss_type: mask_mse
mask_type: PSM^2
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 512
hop_length: 128
separator: wpe_beamformer
separator_conf:
num_spk: 1
loss_type: mask_mse
use_wpe: false
wnet_type: blstmp
wlayers: 3
wunits: 300
wprojs: 320
wdropout_rate: 0.0
taps: 5
delay: 3
use_dnn_mask_for_wpe: true
use_beamformer: true
bnet_type: blstmp
blayers: 3
bunits: 512
bprojs: 512
badim: 320
ref_channel: 3
use_noise_mask: true
beamformer_type: mvdr_souden
bdropout_rate: 0.0
decoder: stft
decoder_conf:
n_fft: 512
hop_length: 128
required:
- output_dir
version: 0.9.7
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)},
pages={785--792},
year={2021},
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
year={2020},
eprint={2011.03706},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
infinitejoy/wav2vec2-large-xls-r-300m-indonesian | infinitejoy | 2022-02-11T05:56:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"id",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- id
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-indonesian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-indonesian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ID dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2759
- Wer: 0.3256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0387 | 4.72 | 1000 | 3.0892 | 1.0 |
| 1.7911 | 9.43 | 2000 | 0.8451 | 0.6702 |
| 1.2826 | 14.15 | 3000 | 0.4211 | 0.4166 |
| 1.1802 | 18.87 | 4000 | 0.3508 | 0.4690 |
| 1.1065 | 23.58 | 5000 | 0.3319 | 0.4662 |
| 1.0921 | 28.3 | 6000 | 0.3056 | 0.3880 |
| 1.0366 | 33.02 | 7000 | 0.2997 | 0.3665 |
| 0.9988 | 37.74 | 8000 | 0.2972 | 0.3653 |
| 0.9864 | 42.45 | 9000 | 0.2697 | 0.3371 |
| 0.9558 | 47.17 | 10000 | 0.2739 | 0.3141 |
| 0.9094 | 51.89 | 11000 | 0.2657 | 0.3533 |
| 0.9034 | 56.6 | 12000 | 0.2699 | 0.3397 |
| 0.8907 | 61.32 | 13000 | 0.2765 | 0.3470 |
| 0.8631 | 66.04 | 14000 | 0.2774 | 0.3346 |
| 0.8389 | 70.75 | 15000 | 0.2743 | 0.3365 |
| 0.8214 | 75.47 | 16000 | 0.2778 | 0.3201 |
| 0.8195 | 80.19 | 17000 | 0.2725 | 0.3286 |
| 0.7994 | 84.91 | 18000 | 0.2782 | 0.3315 |
| 0.7816 | 89.62 | 19000 | 0.2775 | 0.3363 |
| 0.7816 | 94.34 | 20000 | 0.2731 | 0.3278 |
| 0.7635 | 99.06 | 21000 | 0.2767 | 0.3259 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
lgris/WavLM-large-CORAA-pt | lgris | 2022-02-10T23:21:45Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"generated_from_trainer",
"pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- pt
model-index:
- name: WavLM-large-CORAA-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WavLM-large-CORAA-pt
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Wer: 0.3840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.04 | 1000 | 1.9230 | 0.9960 |
| 5.153 | 0.08 | 2000 | 1.3733 | 0.8444 |
| 5.153 | 0.13 | 3000 | 1.1992 | 0.7362 |
| 1.367 | 0.17 | 4000 | 1.1289 | 0.6957 |
| 1.367 | 0.21 | 5000 | 1.0357 | 0.6470 |
| 1.1824 | 0.25 | 6000 | 1.0216 | 0.6201 |
| 1.1824 | 0.29 | 7000 | 0.9338 | 0.6036 |
| 1.097 | 0.33 | 8000 | 0.9149 | 0.5760 |
| 1.097 | 0.38 | 9000 | 0.8885 | 0.5541 |
| 1.0254 | 0.42 | 10000 | 0.8678 | 0.5366 |
| 1.0254 | 0.46 | 11000 | 0.8349 | 0.5323 |
| 0.9782 | 0.5 | 12000 | 0.8230 | 0.5155 |
| 0.9782 | 0.54 | 13000 | 0.8245 | 0.5049 |
| 0.9448 | 0.59 | 14000 | 0.7802 | 0.4990 |
| 0.9448 | 0.63 | 15000 | 0.7650 | 0.4900 |
| 0.9092 | 0.67 | 16000 | 0.7665 | 0.4796 |
| 0.9092 | 0.71 | 17000 | 0.7568 | 0.4795 |
| 0.8764 | 0.75 | 18000 | 0.7403 | 0.4615 |
| 0.8764 | 0.8 | 19000 | 0.7219 | 0.4644 |
| 0.8498 | 0.84 | 20000 | 0.7180 | 0.4502 |
| 0.8498 | 0.88 | 21000 | 0.7017 | 0.4436 |
| 0.8278 | 0.92 | 22000 | 0.6992 | 0.4395 |
| 0.8278 | 0.96 | 23000 | 0.7021 | 0.4329 |
| 0.8077 | 1.0 | 24000 | 0.6892 | 0.4265 |
| 0.8077 | 1.05 | 25000 | 0.6940 | 0.4248 |
| 0.7486 | 1.09 | 26000 | 0.6767 | 0.4202 |
| 0.7486 | 1.13 | 27000 | 0.6734 | 0.4150 |
| 0.7459 | 1.17 | 28000 | 0.6650 | 0.4152 |
| 0.7459 | 1.21 | 29000 | 0.6559 | 0.4078 |
| 0.7304 | 1.26 | 30000 | 0.6536 | 0.4088 |
| 0.7304 | 1.3 | 31000 | 0.6537 | 0.4025 |
| 0.7183 | 1.34 | 32000 | 0.6462 | 0.4008 |
| 0.7183 | 1.38 | 33000 | 0.6381 | 0.3973 |
| 0.7059 | 1.42 | 34000 | 0.6266 | 0.3930 |
| 0.7059 | 1.46 | 35000 | 0.6280 | 0.3921 |
| 0.6983 | 1.51 | 36000 | 0.6248 | 0.3897 |
| 0.6983 | 1.55 | 37000 | 0.6275 | 0.3872 |
| 0.6892 | 1.59 | 38000 | 0.6199 | 0.3852 |
| 0.6892 | 1.63 | 39000 | 0.6180 | 0.3842 |
| 0.691 | 1.67 | 40000 | 0.6144 | 0.3840 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
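For reference, here is a manual decoding sketch (processor + CTC head) rather than the high-level pipeline; the audio path is a placeholder, torchaudio is used only as an example loader, and the recording is assumed to already be 16 kHz mono:
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, AutoModelForCTC

processor = Wav2Vec2Processor.from_pretrained("lgris/WavLM-large-CORAA-pt")
model = AutoModelForCTC.from_pretrained("lgris/WavLM-large-CORAA-pt")

waveform, sample_rate = torchaudio.load("exemplo.wav")  # placeholder path, expected at 16 kHz
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```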
|
lgris/wavlm-large-CORAA-pt-cv7 | lgris | 2022-02-10T23:16:09Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- pt
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wavlm-large-CORAA-pt-cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-large-CORAA-pt-cv7
This model is a fine-tuned version of [lgris/WavLM-large-CORAA-pt](https://huggingface.co/lgris/WavLM-large-CORAA-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2546
- Wer: 0.2261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6029 | 0.13 | 100 | 0.3679 | 0.3347 |
| 0.5297 | 0.26 | 200 | 0.3516 | 0.3227 |
| 0.5134 | 0.39 | 300 | 0.3327 | 0.3167 |
| 0.4941 | 0.52 | 400 | 0.3281 | 0.3122 |
| 0.4816 | 0.65 | 500 | 0.3154 | 0.3102 |
| 0.4649 | 0.78 | 600 | 0.3199 | 0.3058 |
| 0.461 | 0.91 | 700 | 0.3047 | 0.2974 |
| 0.4613 | 1.04 | 800 | 0.3006 | 0.2900 |
| 0.4198 | 1.17 | 900 | 0.2951 | 0.2891 |
| 0.3864 | 1.3 | 1000 | 0.2989 | 0.2862 |
| 0.3963 | 1.43 | 1100 | 0.2932 | 0.2830 |
| 0.3953 | 1.56 | 1200 | 0.2936 | 0.2829 |
| 0.3962 | 1.69 | 1300 | 0.2952 | 0.2773 |
| 0.3811 | 1.82 | 1400 | 0.2915 | 0.2748 |
| 0.3736 | 1.95 | 1500 | 0.2839 | 0.2684 |
| 0.3507 | 2.08 | 1600 | 0.2914 | 0.2678 |
| 0.3277 | 2.21 | 1700 | 0.2895 | 0.2652 |
| 0.3344 | 2.34 | 1800 | 0.2843 | 0.2673 |
| 0.335 | 2.47 | 1900 | 0.2821 | 0.2635 |
| 0.3559 | 2.6 | 2000 | 0.2830 | 0.2599 |
| 0.3254 | 2.73 | 2100 | 0.2711 | 0.2577 |
| 0.3263 | 2.86 | 2200 | 0.2685 | 0.2546 |
| 0.3266 | 2.99 | 2300 | 0.2679 | 0.2521 |
| 0.3066 | 3.12 | 2400 | 0.2727 | 0.2526 |
| 0.2998 | 3.25 | 2500 | 0.2648 | 0.2537 |
| 0.2961 | 3.38 | 2600 | 0.2630 | 0.2519 |
| 0.3046 | 3.51 | 2700 | 0.2684 | 0.2506 |
| 0.3006 | 3.64 | 2800 | 0.2604 | 0.2492 |
| 0.2992 | 3.77 | 2900 | 0.2682 | 0.2508 |
| 0.2775 | 3.9 | 3000 | 0.2732 | 0.2440 |
| 0.2903 | 4.03 | 3100 | 0.2659 | 0.2427 |
| 0.2535 | 4.16 | 3200 | 0.2650 | 0.2433 |
| 0.2714 | 4.29 | 3300 | 0.2588 | 0.2394 |
| 0.2636 | 4.42 | 3400 | 0.2652 | 0.2434 |
| 0.2647 | 4.55 | 3500 | 0.2624 | 0.2371 |
| 0.2796 | 4.67 | 3600 | 0.2611 | 0.2373 |
| 0.2644 | 4.8 | 3700 | 0.2604 | 0.2341 |
| 0.2657 | 4.93 | 3800 | 0.2567 | 0.2331 |
| 0.2423 | 5.06 | 3900 | 0.2594 | 0.2322 |
| 0.2556 | 5.19 | 4000 | 0.2587 | 0.2323 |
| 0.2327 | 5.32 | 4100 | 0.2639 | 0.2299 |
| 0.2613 | 5.45 | 4200 | 0.2569 | 0.2310 |
| 0.2382 | 5.58 | 4300 | 0.2585 | 0.2298 |
| 0.2404 | 5.71 | 4400 | 0.2543 | 0.2287 |
| 0.2368 | 5.84 | 4500 | 0.2553 | 0.2286 |
| 0.2514 | 5.97 | 4600 | 0.2517 | 0.2279 |
| 0.2415 | 6.1 | 4700 | 0.2524 | 0.2270 |
| 0.2338 | 6.23 | 4800 | 0.2540 | 0.2265 |
| 0.219 | 6.36 | 4900 | 0.2549 | 0.2263 |
| 0.2428 | 6.49 | 5000 | 0.2546 | 0.2261 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-med | emre | 2022-02-10T22:56:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-med
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4727
- Wer: 0.4677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
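A sketch of how these values might map onto `transformers.TrainingArguments` (the output directory is a placeholder, the original training script is not part of this card, and the Adam betas/epsilon are the library defaults):
```python
# Sketch only: mirrors the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-Turkish-Tr-med",  # placeholder output path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=60,
    fp16=True,                       # "Native AMP" mixed precision
)
```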
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8093 | 4.21 | 400 | 2.7831 | 1.0 |
| 0.9881 | 8.42 | 800 | 0.5088 | 0.6681 |
| 0.3519 | 12.63 | 1200 | 0.4496 | 0.6007 |
| 0.2436 | 16.84 | 1600 | 0.4993 | 0.5654 |
| 0.1874 | 21.05 | 2000 | 0.4793 | 0.5530 |
| 0.1561 | 25.26 | 2400 | 0.5187 | 0.5589 |
| 0.1336 | 29.47 | 2800 | 0.5135 | 0.5311 |
| 0.1163 | 33.68 | 3200 | 0.4960 | 0.5143 |
| 0.1056 | 37.89 | 3600 | 0.4795 | 0.5045 |
| 0.0959 | 42.11 | 4000 | 0.4883 | 0.4987 |
| 0.0819 | 46.32 | 4400 | 0.4799 | 0.4903 |
| 0.0756 | 50.53 | 4800 | 0.4822 | 0.4831 |
| 0.0692 | 54.74 | 5200 | 0.4621 | 0.4762 |
| 0.062 | 58.95 | 5600 | 0.4727 | 0.4677 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
ibombonato/vit-age-classifier | ibombonato | 2022-02-10T22:06:51Z | 76 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit-age-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8364999890327454
---
# vit-age-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
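A minimal inference sketch (the image path is a placeholder; the standard `transformers` image-classification pipeline is assumed to work with this checkpoint):
```python
# Hedged usage sketch; "face.jpg" is a hypothetical input image.
from transformers import pipeline

classifier = pipeline("image-classification", model="ibombonato/vit-age-classifier")

# Prints the predicted age-group labels with their scores.
for prediction in classifier("face.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```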
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). |
satyaalmasian/temporal_tagger_German_GELECTRA | satyaalmasian | 2022-02-10T15:23:51Z | 61 | 1 | transformers | [
"transformers",
"pytorch",
"electra",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | # BERT based temporal tagger
Token classifier for temporal tagging of plain text using the German GELECTRA model.
# Model description
GELECTRA is a transformer (ELECTRA) model pretrained on a large corpus of German data in a self-supervised fashion. We use GELECTRA for token classification to tag the tokens in the text with the following classes (tags follow the English TIMEX3 format):
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
# Intended uses & limitations
This model is best used together with the code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the raw output can be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output. The repository's examples use the English models, but the German model can be used in the same way.
# How to use
You can load the model as follows:
```
from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA")
```
For inference, use:
```
input_text = "Heute ist Montag, der 3. Januar 2022."  # example input: any German text to tag
processed_text = tokenizer(input_text, return_tensors="pt")
result = model(**processed_text)
classification = result[0]  # token-classification logits, shape (batch, sequence_length, num_labels)
```
For an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
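For a quick look at the raw predictions without the repository's post-processing, a simplified sketch that continues the snippets above and maps the logits to tag names (it assumes the checkpoint's config carries the `id2label` mapping; `merge_tokens` remains the recommended way to obtain clean spans):
```
# Sketch only: greedy argmax over the logits from the snippet above.
predictions = classification.argmax(dim=-1)[0]  # best tag id per token
tokens = tokenizer.convert_ids_to_tokens(processed_text["input_ids"][0].tolist())
for token, tag_id in zip(tokens, predictions.tolist()):
    print(token, model.config.id2label[tag_id])  # e.g. "B-DATE", "I-DATE", "O"
```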
To further fine-tune the model, use the `Trainer` from Hugging Face. An example of a similar fine-tuning setup can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
For pre-training we use a large corpus of news articles automatically annotated with HeidelTime.
We use two data sources for fine-tuning:
[TempEval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), automatically translated to German, and the
[KRAUTS dataset](https://github.com/JannikStroetgen/KRAUTS).
# Training procedure
The model is trained from a publicly available checkpoint on Hugging Face (`deepset/gelectra-large`) with a batch size of 192. For pretraining we use a learning rate of 1e-07 with the Adam optimizer and linear weight decay.
For fine-tuning we use a batch size of 16 and a learning rate of 5e-05, again with the Adam optimizer and linear weight decay.
We fine-tune with 3 different random seeds; this version of the model is the one with seed=7.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new | ajaiswal1008 | 2022-02-10T15:11:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi-colab_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-colab_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
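Since the card reports no evaluation results, a hedged inference sketch may still help as a starting point (it assumes the repository includes the processor/vocabulary files; the audio path is a placeholder):
```python
# Sketch only: manual feature extraction + greedy CTC decoding; "sample.wav" is a placeholder.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, rate = torchaudio.load("sample.wav")                      # placeholder Hindi audio file
speech = torchaudio.functional.resample(speech, rate, 16_000)[0]  # first channel, resampled to 16 kHz

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```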
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
junnyu/roformer_base_wwm_cluecorpussmall | junnyu | 2022-02-10T12:26:39Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"roformer",
"fill-mask",
"tf2.0",
"paddlepaddle",
"zh",
"arxiv:2104.09864",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
- paddlepaddle
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
Pretrained on a 13 GB Chinese corpus (CLUE Corpus Small), with whole-word masked language modeling (MLM) and sentence order prediction (SOP) as the training tasks.
The training logic follows this example: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/ernie-1.0
## Training details
- paddlepaddle+paddlenlp
- V100 x 4
- batch size 256
- max_seq_len 512
- max_lr 0.0001
- min_lr 0.00001
- weight_decay 0.01
- grad_clip 1.0
- total sentences seen: ```128*300k + 256*150k + 256*145k + 256*465k + 256*170k = 276.48M```
- roughly 54% of training for 1M steps at a batch size of 512
Final loss:
```python
[2022-02-05 16:05:59,067] [ INFO] - global step 170100, loss: 2.651634932, lm_loss: 2.603405, sop_loss: 0.048229, speed: 1.06 steps/s, ips: 271.68 seqs/s, learning rate: 6.66465e-05, loss_scaling: 137438.96875, num_good_steps: 356, num_bad_steps: 0
[2022-02-05 16:07:28,227] [ INFO] - global step 170200, loss: 2.822231531, lm_loss: 2.662831, sop_loss: 0.159401, speed: 1.12 steps/s, ips: 287.13 seqs/s, learning rate: 6.66263e-05, loss_scaling: 137438.96875, num_good_steps: 59, num_bad_steps: 0
[2022-02-05 16:08:57,346] [ INFO] - global step 170300, loss: 2.710968971, lm_loss: 2.673646, sop_loss: 0.037323, speed: 1.12 steps/s, ips: 287.26 seqs/s, learning rate: 6.66061e-05, loss_scaling: 137438.96875, num_good_steps: 159, num_bad_steps: 0
[2022-02-05 16:10:26,698] [ INFO] - global step 170400, loss: 2.867662907, lm_loss: 2.619032, sop_loss: 0.248631, speed: 1.12 steps/s, ips: 286.51 seqs/s, learning rate: 6.65859e-05, loss_scaling: 137438.96875, num_good_steps: 259, num_bad_steps: 0
[2022-02-05 16:11:55,714] [ INFO] - global step 170500, loss: 3.158756495, lm_loss: 2.953678, sop_loss: 0.205079, speed: 1.12 steps/s, ips: 287.59 seqs/s, learning rate: 6.65657e-05, loss_scaling: 137438.96875, num_good_steps: 359, num_bad_steps: 0
[2022-02-05 16:13:24,869] [ INFO] - global step 170600, loss: 2.860815048, lm_loss: 2.754750, sop_loss: 0.106064, speed: 1.12 steps/s, ips: 287.14 seqs/s, learning rate: 6.65455e-05, loss_scaling: 137438.96875, num_good_steps: 33, num_bad_steps: 0
```
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch and TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, BertTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天||人||气||阳||雨]很好,我[想||就||要||也||还]去公园玩。
```
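The same model/tokenizer pair can likely also be wrapped in the `fill-mask` pipeline; a short sketch (results should match the manual loop above, but this is an assumption rather than something documented here):
```python
from transformers import BertTokenizer, RoFormerForMaskedLM, pipeline

# Reuse the explicit tokenizer/model pair so the pipeline does not have to guess the tokenizer class.
tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Prints the top predicted tokens for the masked position with their scores.
for pred in fill_mask("今天[MASK]很好,我想去公园玩!"):
    print(pred["token_str"], round(pred["score"], 3))
```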
## Citation
BibTeX:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
SetFit/deberta-v3-large__sst2__train-8-8 | SetFit | 2022-02-10T09:59:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-8
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7414
- Accuracy: 0.5623
## Model description
More information needed
## Intended uses & limitations
More information needed
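A minimal inference sketch (the example sentence is illustrative, and the card does not document a label-to-sentiment mapping, so outputs may appear as `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

# Hedged usage sketch for the fine-tuned SST-2 classifier.
classifier = pipeline("text-classification", model="SetFit/deberta-v3-large__sst2__train-8-8")

# Returns the predicted label and score; the label names depend on the checkpoint's config.
print(classifier("A charming and often affecting journey."))
```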
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6597 | 1.0 | 3 | 0.7716 | 0.25 |
| 0.6376 | 2.0 | 6 | 0.7802 | 0.25 |
| 0.5857 | 3.0 | 9 | 0.6625 | 0.75 |
| 0.4024 | 4.0 | 12 | 0.5195 | 0.75 |
| 0.2635 | 5.0 | 15 | 0.4222 | 1.0 |
| 0.1714 | 6.0 | 18 | 0.4410 | 0.5 |
| 0.1267 | 7.0 | 21 | 0.7773 | 0.75 |
| 0.0582 | 8.0 | 24 | 0.9070 | 0.75 |
| 0.0374 | 9.0 | 27 | 0.9539 | 0.75 |
| 0.0204 | 10.0 | 30 | 1.0507 | 0.75 |
| 0.012 | 11.0 | 33 | 1.2802 | 0.5 |
| 0.0086 | 12.0 | 36 | 1.4272 | 0.5 |
| 0.0049 | 13.0 | 39 | 1.4803 | 0.5 |
| 0.0039 | 14.0 | 42 | 1.4912 | 0.5 |
| 0.0031 | 15.0 | 45 | 1.5231 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|