| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| ali2066/finetuned_token_2e-05_all_16_02_2022-15_43_42 | ali2066 | 2022-02-16T14:46:02Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_43_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_43_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.3286
- Recall: 0.3334
- F1: 0.3310
- Accuracy: 0.9447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 |
| No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 |
| No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 |
| No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 |
| No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
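A minimal sketch of how such a token-classification checkpoint can be loaded for inference with the `transformers` pipeline API (the model id is taken from the card; the example sentence is made up, and entity quality will reflect the F1 of roughly 0.33 reported above):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT token-classification head from the Hub.
token_clf = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_2e-05_all_16_02_2022-15_43_42",
)

# Illustrative input only; the card does not document the label set.
for ent in token_clf("Angela Merkel visited the Hugging Face office in Paris."):
    print(ent["entity"], ent["word"], round(ent["score"], 3))
```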
|
| marcopost-it/biobert-it | marcopost-it | 2022-02-16T14:15:27Z | 153 | 1 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
Hi!
This model has been trained on Italian biomedical data.
For further information, do not hesitate to send me a message! ;)
[email protected] (Marco Postiglione)
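Since the card itself does not show usage, here is a hedged fill-mask sketch (the example Italian sentence is an assumption; BERT-style checkpoints use the `[MASK]` token):

```python
from transformers import pipeline

# Fill-mask pipeline over the Italian biomedical BERT checkpoint.
unmasker = pipeline("fill-mask", model="marcopost-it/biobert-it")

# Illustrative sentence: "The patient presents a severe respiratory [MASK]."
for pred in unmasker("Il paziente presenta una grave [MASK] respiratoria."):
    print(pred["token_str"], round(pred["score"], 3))
```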
|
| ali2066/finetuned_token_2e-05_16_02_2022-14_37_42 | ali2066 | 2022-02-16T13:40:00Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_37_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_37_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
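A rough sketch of how the hyperparameters listed above might be expressed with `transformers.TrainingArguments` (the output directory is an assumption; the original training script is not included in the card):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card; everything else is assumed.
training_args = TrainingArguments(
    output_dir="finetuned_token_2e-05_16_02_2022-14_37_42",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```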
|
| ali2066/finetuned_token_2e-05_16_02_2022-14_25_47 | ali2066 | 2022-02-16T13:28:05Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_25_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_25_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
| ali2066/finetuned_token_2e-05_16_02_2022-14_23_23 | ali2066 | 2022-02-16T13:25:42Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_23_23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_23_23
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
| ali2066/finetuned_token_2e-05_16_02_2022-14_20_41 | ali2066 | 2022-02-16T13:23:18Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_20_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_20_41
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
| ali2066/finetuned_token_2e-05_16_02_2022-14_15_41 | ali2066 | 2022-02-16T13:18:14Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_15_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_15_41
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1746
- Precision: 0.3191
- Recall: 0.3382
- F1: 0.3284
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.2908 | 0.1104 | 0.1905 | 0.1398 | 0.8731 |
| No log | 2.0 | 76 | 0.2253 | 0.1682 | 0.3206 | 0.2206 | 0.9114 |
| No log | 3.0 | 114 | 0.2041 | 0.2069 | 0.3444 | 0.2585 | 0.9249 |
| No log | 4.0 | 152 | 0.1974 | 0.2417 | 0.3603 | 0.2894 | 0.9269 |
| No log | 5.0 | 190 | 0.1958 | 0.2707 | 0.3683 | 0.3120 | 0.9299 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
| Zohar/distilgpt2-finetuned-restaurant-reviews | Zohar | 2022-02-16T12:53:21Z | 8 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-restaurant-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-restaurant-reviews
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a subset of the Yelp restaurant reviews dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6331 | 1.0 | 2536 | 3.5280 |
| 3.5676 | 2.0 | 5072 | 3.4793 |
| 3.5438 | 3.0 | 7608 | 3.4668 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
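A minimal text-generation sketch for this checkpoint (the prompt and sampling settings are assumptions, not taken from the card):

```python
from transformers import pipeline

# DistilGPT-2 fine-tuned on restaurant reviews; sample a short continuation.
generator = pipeline(
    "text-generation", model="Zohar/distilgpt2-finetuned-restaurant-reviews"
)

out = generator(
    "The pasta at this place was",  # assumed prompt
    max_length=50,
    num_return_sequences=1,
    do_sample=True,
)
print(out[0]["generated_text"])
```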
|
| joe5campbell/BERT_Tweet_Sentiment_TEST | joe5campbell | 2022-02-16T11:03:42Z | 7 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_TEST
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_TEST
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5541
- Train Accuracy: 0.9375
- Validation Loss: 0.6546
- Validation Accuracy: 1.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6902 | 0.625 | 0.6677 | 1.0 | 0 |
| 0.5541 | 0.9375 | 0.6546 | 1.0 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
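The optimizer dictionary above corresponds roughly to the following Keras setup; this is a hedged sketch of how the checkpoint could be reloaded and recompiled with the same settings (the loss choice is an assumption, and the APIs shown follow the TensorFlow/Transformers versions listed in the card):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Reload the fine-tuned checkpoint from the Hub.
model = TFAutoModelForSequenceClassification.from_pretrained(
    "joe5campbell/BERT_Tweet_Sentiment_TEST"
)

# Mirror the optimizer config reported in the card.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-8, clipnorm=1.0
)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # assumed
    metrics=["accuracy"],
)
```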
|
| chaitanya97/wav2vec2-large-xls-r-300m-turkish-colab | chaitanya97 | 2022-02-16T10:38:44Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 33.1265
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 21.4247 | 4.0 | 4 | 33.1265 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
| joe5campbell/BERT_Tweet_Sentiment_100_2epochs | joe5campbell | 2022-02-16T10:34:00Z | 7 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_100_2epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_100_2epochs
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6279
- Train Accuracy: 0.6824
- Validation Loss: 0.7791
- Validation Accuracy: 0.2667
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7045 | 0.4882 | 0.7236 | 0.2667 | 0 |
| 0.6279 | 0.6824 | 0.7791 | 0.2667 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
| premrawat/en_ner_model | premrawat | 2022-02-16T09:23:12Z | 6 | 0 | spacy | ["spacy", "token-classification", "en", "model-index", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_ner_model
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3624161074
- name: NER Recall
type: recall
value: 0.384341637
- name: NER F Score
type: f_score
value: 0.3730569948
---
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_model` |
| **Version** | `0.1.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 label for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 37.31 |
| `ENTS_P` | 36.24 |
| `ENTS_R` | 38.43 |
| `TOK2VEC_LOSS` | 305790.85 |
| `NER_LOSS` | 801195.82 |
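A brief usage sketch, assuming the packaged pipeline has been installed locally (spaCy models on the Hub are distributed as installable packages; the example sentence is made up):

```python
import spacy

# Assumes the packaged model has already been installed,
# e.g. via pip from the wheel attached to the model repository.
nlp = spacy.load("en_ner_model")

doc = nlp("Looking for engineers with experience in Python and Kubernetes.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # the card's label scheme has a single SKILL label
```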
|
| vxvxx/t5-small-finetuned-no_paragraph-to-yes_paragraph-2 | vxvxx | 2022-02-16T07:13:28Z | 11 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-no_paragraph-to-yes_paragraph-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-no_paragraph-to-yes_paragraph-2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 0.006 | 1.0 | 8081 | 0.0002 | 0.0 | 19.0 |
| 0.0032 | 2.0 | 16162 | 0.0001 | 0.0 | 19.0 |
| 0.0026 | 3.0 | 24243 | 0.0001 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
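A minimal seq2seq inference sketch for this T5 checkpoint (the input text, and whether a task prefix is required, are assumptions; the card does not document the expected input format):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vxvxx/t5-small-finetuned-no_paragraph-to-yes_paragraph-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input; the card does not specify how inputs were formatted during training.
inputs = tokenizer("some unsegmented text without paragraph breaks", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)  # Gen Len above is 19.0
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```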
|
| jatinshah/bert-finetuned-ner | jatinshah | 2022-02-16T03:50:43Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9330024813895782
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9410194377242012
- name: Accuracy
type: accuracy
value: 0.9861511744275033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9330
- Recall: 0.9492
- F1: 0.9410
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 1756 | 0.0647 | 0.9147 | 0.9345 | 0.9245 | 0.9826 |
| 0.0305 | 2.0 | 3512 | 0.0599 | 0.9333 | 0.9463 | 0.9398 | 0.9858 |
| 0.0212 | 3.0 | 5268 | 0.0599 | 0.9330 | 0.9492 | 0.9410 | 0.9862 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
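A short inference sketch for this CoNLL-2003 NER checkpoint (the example sentence is made up; `aggregation_strategy="simple"` merges word pieces into whole entities and is available in the Transformers version listed above):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jatinshah/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)

for ent in ner("Hugging Face was founded in New York by Clément Delangue."):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```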
|
| jkang/espnet2_librispeech_100_conformer | jkang | 2022-02-16T01:05:55Z | 4 | 0 | espnet | ["espnet", "audio", "automatic-speech-recognition", "dataset:librispeech_100", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- librispeech_100
license: cc-by-4.0
---
## ESPnet2 ASR model
### `jkang/espnet2_librispeech_100_conformer`
- This model was trained by jaekookang using the librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
- Gradio Demo: [🤗 ESPNet2 ASR Librispeech Conformer](https://huggingface.co/spaces/jkang/espnet2_asr_librispeech_100h)
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 140704c146f8beeed74973f5258379f6133dcdfb
pip install -e .
cd egs2/librispeech_100/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_librispeech_100_conformer
```
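Alternatively, a hedged Python sketch using `espnet_model_zoo` (the package and API names follow the usual ESPnet2 model-card snippet and have not been verified against this specific checkpoint; 16 kHz mono audio is assumed, matching LibriSpeech):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text
from espnet_model_zoo.downloader import ModelDownloader

# Download and unpack the model from the Hub, then build the ASR interface.
d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack("jkang/espnet2_librispeech_100_conformer"))

# Decode a local 16 kHz WAV file (path is illustrative).
speech, rate = soundfile.read("sample.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```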
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Feb 11 01:42:52 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `140704c146f8beeed74973f5258379f6133dcdfb`
- Commit date: `Tue Feb 8 16:06:02 2022 -0500`
- GPU: NVIDIA GeForce RTX 3090 (single GPU took: 13h)
## asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|94.5|5.1|0.4|0.7|6.3|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|84.8|13.7|1.5|2.1|17.3|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|94.2|5.3|0.5|0.8|6.6|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|84.7|13.8|1.5|2.0|17.3|81.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|98.2|1.1|0.8|0.7|2.5|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|93.3|4.1|2.6|2.0|8.7|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|98.0|1.1|0.9|0.7|2.7|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|93.5|4.0|2.5|1.9|8.4|81.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|69558|92.0|5.0|3.0|0.7|8.7|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|64524|81.3|13.2|5.4|2.4|21.1|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|66983|91.8|5.1|3.1|0.6|8.8|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|66650|81.2|13.1|5.7|2.1|20.9|81.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 400
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ED
- ▁I
- ▁HE
- ▁WAS
- ▁THAT
- ING
- ▁IT
- ''''
- ▁HIS
- ▁HAD
- ▁WITH
- ▁YOU
- ▁FOR
- T
- ▁AS
- ▁HER
- LY
- ▁NOT
- ▁BUT
- ▁SHE
- ▁BE
- D
- E
- ▁IS
- ▁AT
- ▁ON
- ▁HIM
- ▁THEY
- ▁BY
- ▁HAVE
- Y
- ▁MY
- ▁SO
- ▁ALL
- ▁THIS
- ▁WERE
- ▁WHICH
- ▁ME
- ▁FROM
- ▁ONE
- ▁SAID
- ▁WE
- N
- ER
- ▁NO
- ▁THERE
- ▁WHEN
- ▁AN
- ▁THEIR
- ▁OR
- ▁WOULD
- ▁WHO
- ▁THEM
- R
- ▁IF
- ▁WHAT
- ▁ARE
- ▁BEEN
- ▁OUT
- ▁UP
- M
- ▁WILL
- ▁DO
- ▁MAN
- ▁COULD
- C
- ▁THEN
- ▁INTO
- ▁MORE
- ▁SOME
- ES
- P
- ▁VERY
- ▁NOW
- ▁YOUR
- ▁LITTLE
- ▁TIME
- ▁ABOUT
- ▁DID
- ▁THAN
- ▁LIKE
- ▁HAS
- L
- G
- AL
- IN
- ▁UPON
- ▁CAN
- ▁WELL
- ▁OTHER
- ▁OVER
- US
- ▁TWO
- ▁ONLY
- ▁ANY
- ▁OUR
- O
- EN
- RE
- ▁MADE
- U
- ▁AFTER
- ▁SEE
- ▁S
- ▁DOWN
- ▁BEFORE
- LL
- ST
- B
- ▁OLD
- ▁DAY
- ▁MISS
- ▁GREAT
- ▁US
- ▁KNOW
- OR
- ▁SUCH
- ▁GOOD
- ▁WAY
- A
- ▁THESE
- ▁CAME
- ▁UN
- ▁SHOULD
- ▁HOW
- ▁MISTER
- ▁GO
- ▁MUCH
- ▁WHERE
- ▁MUST
- ▁NEVER
- ▁COME
- ▁BACK
- ION
- 'ON'
- ▁LONG
- F
- ▁AGAIN
- ▁FIRST
- LE
- ▁MEN
- ▁EVEN
- NESS
- ▁MIGHT
- ▁OWN
- ▁MAY
- K
- ▁HIMSELF
- ▁SAY
- ▁JUST
- ▁THROUGH
- ▁RE
- ▁AM
- ▁ITS
- ▁WENT
- ▁THOUGHT
- ▁
- ▁DE
- ▁MAKE
- I
- ▁HAND
- ▁THINK
- ▁HOUSE
- ▁HERE
- IC
- H
- ATION
- ▁LIFE
- IT
- ▁EYES
- ▁MOST
- ▁WITHOUT
- ▁TOO
- ▁THOSE
- ABLE
- ▁EVERY
- ▁DON
- ▁MANY
- ▁AWAY
- ITY
- VE
- W
- ▁STILL
- ▁BEING
- ▁C
- ▁LAST
- ▁NIGHT
- ▁O
- ▁HEAD
- AN
- ▁FOUND
- ▁NOTHING
- ▁YOUNG
- ▁WHILE
- ▁TAKE
- ▁GET
- ▁PEOPLE
- RO
- ▁OFF
- ▁THOUGH
- EST
- ▁YET
- ▁THREE
- TH
- ▁RIGHT
- ▁UNDER
- AR
- ▁FACE
- IES
- ▁ROOM
- ▁NEW
- ▁SAW
- RA
- V
- ▁ASKED
- ▁TELL
- ERS
- ▁SAME
- MENT
- ▁HEART
- LESS
- ▁WORK
- ▁PLACE
- ▁ANOTHER
- ▁EVER
- ▁LEFT
- ▁SHALL
- ▁FATHER
- ▁PUT
- ▁ONCE
- ▁TOOK
- ▁LET
- ▁ALWAYS
- ▁SEEMED
- ▁PART
- IL
- UR
- ▁WHY
- ▁TOLD
- ▁GIVE
- ▁LOVE
- CE
- ▁MIND
- ▁LOOKED
- ▁HEARD
- ▁SOON
- ▁LOOK
- ▁MOTHER
- ▁FAR
- IVE
- ▁BECAUSE
- ▁HOME
- OUS
- ▁T
- EL
- ▁D
- ▁SOMETHING
- ▁SIDE
- ▁KING
- IS
- ATE
- ▁MOMENT
- ENT
- RY
- ▁THINGS
- ▁ST
- ▁LIGHT
- ▁FIND
- ▁GOING
- ▁THING
- ▁WORLD
- IR
- AT
- ▁WATER
- ▁END
- ▁DOOR
- ISH
- ▁KNEW
- ▁WOMAN
- ▁SIR
- ▁EACH
- RI
- ▁HAVING
- ▁AGAINST
- ▁FEW
- ▁E
- ▁BEGAN
- ▁BETTER
- ▁YES
- ▁NAME
- ▁ENOUGH
- ET
- ▁HARD
- ▁VOICE
- ▁YEARS
- ▁GOT
- ▁WHOLE
- ▁WHITE
- ▁WANT
- ▁GIRL
- ▁DONE
- ▁SEEN
- ▁HUNDRED
- ▁CALLED
- ▁BETWEEN
- ▁MORNING
- FUL
- AS
- ▁FELT
- TER
- ▁KIND
- X
- CH
- ▁HERSELF
- ANT
- ▁TOWARD
- ▁HALF
- ▁OH
- ▁AMONG
- ▁HOWEVER
- ▁TURNED
- ▁ALSO
- ▁BOTH
- ▁POOR
- ▁PERHAPS
- ▁REPLIED
- ▁COURSE
- UL
- ▁QUITE
- ▁REST
- ▁DOES
- ▁MYSELF
- NG
- LO
- ANCE
- ▁MA
- ▁SET
- ▁SMALL
- ▁B
- ▁SURE
- ▁F
- ▁GAVE
- ▁PRESENT
- ▁HIGH
- ▁ALMO
- ▁R
- CK
- ▁WHOM
- ▁NEAR
- ▁CARE
- ▁WAR
- ▁GOD
- ▁TOGETHER
- ▁SAT
- ▁SHOW
- TE
- NE
- ▁BEST
- ▁UNTIL
- ▁OPEN
- ▁W
- ▁FOUR
- ▁DEAR
- ▁HANDS
- ▁WORDS
- ▁SINCE
- ▁LAND
- ▁DIS
- MAN
- ▁ANYTHING
- ▁FEET
- ▁NEXT
- ▁GENERAL
- LING
- ▁LAY
- ▁NOR
- ▁STOOD
- ▁BLACK
- ▁POWER
- ▁BROUGHT
- Z
- IE
- ▁ROUND
- ▁BELIEVE
- ▁LARGE
- ▁ALONG
- ▁HELP
- ▁DAYS
- ▁FIVE
- ▁K
- ▁HOPE
- AM
- ▁CO
- ▁KEEP
- ▁FULL
- ▁WALK
- ▁MASTER
- ATED
- ▁NATURE
- ▁JOHN
- ▁POINT
- ▁DUR
- ▁MATTER
- ▁MONEY
- ▁CHILD
- ▁LOOKING
- ▁RATHER
- ▁AIR
- IA
- ▁P
- ▁TWENTY
- ▁FIRE
- OL
- ▁LESS
- ▁SHORT
- ▁PASSED
- ▁INDEED
- TY
- ▁CASE
- ▁WORD
- ▁WISH
- ▁COUNTRY
- LED
- ID
- ▁BOY
- ▁SOUND
- ▁FORM
- ▁CRIED
- LA
- ▁FRIEND
- TON
- ▁FACT
- ▁UNCLE
- ▁TAKEN
- ▁AL
- ▁TEN
- IAN
- ▁GONE
- ▁SEA
- ▁REASON
- TING
- ▁WHOSE
- ▁OTHERS
- AC
- ▁LI
- ▁DEATH
- ▁CERTAIN
- ▁ANSWERED
- ▁THEMSELVES
- ▁LADY
- ▁STATE
- ▁CAR
- ▁WIFE
- ▁THOUSAND
- ▁TRUE
- ▁BEHIND
- AGE
- ▁DOCTOR
- ▁FEAR
- ▁OFTEN
- OM
- ▁TILL
- ▁HA
- IOUS
- ▁AROUND
- IST
- ▁SENT
- ▁SPEAK
- ▁WOMEN
- ▁GROUND
- VER
- ENCE
- NA
- ▁TALK
- ▁CHILDREN
- TION
- CO
- MO
- ▁HEAR
- ▁ORDER
- ▁LEAVE
- ▁PRO
- ▁ALREADY
- ▁LA
- ▁FINE
- SE
- ▁BA
- PP
- ▁THUS
- AD
- ▁NEED
- ▁SIGHT
- ▁CALL
- ▁FELL
- ▁MANNER
- MP
- ▁BECAME
- UM
- ▁WATCH
- OW
- ▁FOOT
- ▁CANNOT
- ▁BODY
- ▁TOWN
- ▁LIVE
- INE
- ▁RETURNED
- ▁WONDER
- MA
- ▁G
- UT
- ▁CLOSE
- UN
- IM
- ▁ALONE
- ▁DIDN
- ▁LORD
- ▁RED
- ARY
- ▁GIVEN
- ▁SIX
- ▁EVERYTHING
- ▁DARK
- ▁DEAD
- ▁STRONG
- ▁SON
- ▁COMING
- URE
- ▁HELD
- ▁ABOVE
- ▁REALLY
- ▁BEAUTIFUL
- ▁SECOND
- ARD
- ▁EVENING
- ▁CON
- ▁HOUR
- ▁FELLOW
- ▁ROSE
- ▁PERSON
- ▁EX
- ▁CH
- ▁FORCE
- ▁MO
- ▁ARM
- ▁CAUSE
- ▁TURN
- ▁CITY
- ▁DOUBT
- ▁QUESTION
- TIC
- ▁DEEP
- ▁HAIR
- ICAL
- ▁MEAN
- ▁DI
- ▁CLEAR
- ▁SOMETIMES
- ▁STRANGE
- ▁FEEL
- ▁HO
- ▁IMP
- WARD
- AUGHT
- ▁CAPTAIN
- ▁USE
- ▁UNDERSTAND
- ▁KEPT
- ▁BR
- ▁WOOD
- ▁PRE
- ▁YEAR
- ▁TI
- ▁LEAST
- ▁BED
- ▁SA
- ▁TABLE
- ▁BECOME
- ▁FREE
- ▁FAMILY
- ME
- ▁EYE
- ▁WHETHER
- ▁MAKING
- ▁WITHIN
- ▁SORT
- ▁ANSWER
- ▁PO
- ▁SAYS
- ▁EARTH
- ▁RETURN
- ▁SUDDENLY
- ▁FRIENDS
- ▁GREEN
- ▁SUN
- ▁FAIR
- ▁TH
- ▁FALL
- ▁EITHER
- ▁BO
- ▁PRINCE
- ▁THOU
- ▁ITSELF
- ▁CHURCH
- ▁BIG
- ▁ABLE
- ▁DIFFERENT
- ▁SEVERAL
- ▁DAUGHTER
- ▁WON
- ▁WIND
- ▁BAD
- ▁LOST
- ▁READ
- ▁STORY
- ▁APPEARED
- DE
- ▁NUMBER
- ▁SP
- ▁LOW
- ▁ROAD
- ▁POSSIBLE
- ▁HUMAN
- ▁RIVER
- ▁STREET
- ▁GA
- ▁COLD
- ▁MET
- ▁ACT
- ▁BROTHER
- ▁AGE
- ▁KNOWN
- ▁CONTINUED
- ▁BRING
- ▁ILL
- ▁RUN
- ▁LAW
- ▁SUBJECT
- ▁CUT
- J
- PER
- ▁PA
- ▁TROUBLE
- ▁GLAD
- HE
- ▁SLEEP
- MEN
- ▁LATE
- ▁MEANS
- ▁ASK
- ▁REACHED
- ▁RAN
- AK
- ▁HORSE
- ▁USED
- WAY
- OP
- ▁WINDOW
- ▁SNOW
- ▁PAST
- ▁OBJECT
- ▁THEREFORE
- IONS
- ▁TREE
- ▁COMP
- ▁BLUE
- CA
- ▁VI
- ▁SIGN
- ▁EIGHTEEN
- ▁GARDEN
- ▁BUSINESS
- ▁PETER
- ▁FOLLOWED
- ▁SEEM
- ▁HOLD
- ▁HAPPY
- ▁LONGER
- ▁ACROSS
- ▁BU
- BE
- ▁ELSE
- ▁PLAY
- ▁SOUL
- ▁STAND
- ▁ARMS
- ▁SCHOOL
- ▁PRINCESS
- ▁CERTAINLY
- LT
- ▁ENGLISH
- ▁SEVEN
- ▁PER
- ▁IDEA
- ▁LE
- ▁BOOK
- ▁FEELING
- ▁HUSBAND
- ▁LINE
- PT
- THOUGH
- ▁OUGHT
- ▁RICH
- IP
- ▁VIEW
- ▁DREAM
- ▁SENSE
- ▁LO
- ▁READY
- ▁CARRIED
- ▁M
- ▁REGARD
- ▁CHANCE
- ▁WANTED
- ▁LIVED
- ▁LATER
- ▁INTEREST
- ▁EN
- ▁EFFECT
- ▁CLA
- ▁CHANGE
- ▁CA
- ▁REAL
- ▁SUPPOSE
- LES
- ▁ART
- ▁TIMES
- ▁MAR
- IF
- ▁WILD
- ▁ADDED
- ▁LETTER
- IAL
- ▁THANK
- ▁PARTY
- LAND
- ▁PAY
- ▁BREATH
- ▁TAKING
- ▁COURT
- ▁COUNT
- ILY
- ▁COMMON
- ▁PUBLIC
- ▁PURPOSE
- ▁PRETTY
- ▁TRUTH
- ▁STAY
- ▁EM
- NT
- ▁SH
- ▁REMEMBER
- ▁ENTERED
- ▁RECEIVED
- RED
- ▁SPOKE
- ▁USUAL
- ▁THY
- ▁FIGURE
- ▁LED
- ▁TREES
- ▁TRIED
- ▁FORWARD
- NED
- ▁HAT
- ▁BLOOD
- ▁BEYOND
- ▁BANK
- ▁LIVING
- ▁JOY
- ▁HOURS
- ▁ENGLAND
- ▁STONE
- VI
- GE
- ▁SWEET
- ▁POSITION
- ▁FRONT
- ▁GIRLS
- ▁VISIT
- ▁CHARACTER
- ▁SPIRIT
- ▁TA
- BO
- QUE
- QUI
- ▁OPENED
- ▁OCCASION
- ▁MEET
- ▁EIGHT
- ▁REMAIN
- ▁PASS
- TO
- ▁NORTH
- ▁SERVICE
- ▁SISTER
- ▁SE
- ▁BEAR
- ▁PLEASURE
- ▁CHIEF
- ▁FOREST
- ▁BELL
- ▁EXPERIENCE
- ▁STRUCK
- ▁CARRY
- ORY
- ▁WARM
- 'NO'
- ▁WORTH
- ▁SAYING
- ▁SILENCE
- ▁CROSS
- ▁JE
- ▁H
- ▁BEAUTY
- PH
- ▁DEAL
- KE
- ▁SECRET
- DY
- ▁MILES
- ▁LU
- ▁DOING
- ▁BOYS
- ▁CROWD
- ▁ACCOUNT
- REW
- ISM
- TI
- ▁FE
- ▁NONE
- ▁RO
- ▁NEARLY
- ▁CHA
- ▁YOUTH
- ▁CAP
- HA
- ▁BIT
- ▁LIE
- ▁ATTENTION
- ▁STANDING
- ▁STAR
- ▁RESPECT
- ▁FURTHER
- ATIONS
- ▁ROCK
- ▁BOW
- EM
- ▁EARLY
- ▁MOUTH
- ▁BOAT
- UB
- ▁IMMEDIATELY
- ▁EXCEPT
- SHIP
- ▁PICTURE
- ▁BRIGHT
- ▁WA
- ▁GREW
- ▁LEAD
- ▁CUR
- ▁TONE
- RRY
- RS
- ▁WIDE
- CHE
- ▁FORTH
- IG
- OS
- ▁NEITHER
- ▁YOURSELF
- ▁SMILE
- ▁DRESS
- ▁OPINION
- ▁HAPPENED
- ▁WAIT
- ▁SIT
- ▁SHIP
- ▁AH
- ▁DESIRE
- ▁THICK
- ▁THIRD
- ▁GRAND
- ▁FOLLOW
- ▁GATHER
- ▁HILL
- ALLY
- ▁COMPANY
- ▁CHAIR
- DER
- ▁TOP
- ▁PAR
- ▁LENGTH
- ▁THIRTY
- ▁MINE
- ▁MI
- ▁EAT
- ▁EQUAL
- ▁AFRAID
- ▁FRESH
- ▁TAIL
- ▁FILLED
- ▁SU
- ▁MINUTES
- ▁FAST
- BU
- ▁ENTER
- ▁QUEEN
- ▁UTTER
- AG
- ▁FLOOR
- ▁SHA
- DI
- ▁HEAVEN
- ▁STOPPED
- ▁GUARD
- ▁HALL
- ▁BAR
- ▁COMPLETE
- ▁NINE
- ▁WEEK
- ▁GOLD
- VA
- ▁FIFTY
- ▁BEAT
- ▁PRESS
- ▁ATTEMPT
- ▁EXCLAIMED
- DO
- ▁CONF
- ▁SEEMS
- ▁STARTED
- ▁EL
- ▁HAR
- ▁EXPRESSION
- ▁TRA
- ▁WONDERFUL
- ▁SAINT
- ▁APPEARANCE
- ▁GRAVE
- ▁OFFICE
- ▁INSTEAD
- ▁SILENT
- ▁SOUTH
- ▁AGO
- ▁CAMP
- ▁LOVED
- ▁PATH
- ▁LEARN
- ▁PLAN
- ▁GOVERNMENT
- OUR
- PPED
- ▁SITTING
- ▁SEAT
- TEN
- RESS
- SIDE
- ▁MOVED
- ▁DIE
- ▁RESULT
- ▁SPRING
- ▁PLEASE
- ▁RI
- ▁NATURAL
- ▁ANNE
- ▁STA
- ▁CORNER
- ▁WALL
- ▁IMPOSSIBLE
- ▁BROWN
- ▁SUIT
- ▁MUSIC
- PI
- ▁TRY
- ▁DIED
- ▁TEARS
- ▁JU
- ▁COMFORT
- ▁DANGER
- ▁MEASURE
- ▁PROPERTY
- ▁BORN
- CON
- ▁CR
- ▁BROKEN
- ▁MASS
- EVER
- IER
- ▁EXPRESS
- ▁POCKET
- ▁SCARCE
- ▁SELF
- NY
- ▁MADAME
- ▁LAUGHED
- ▁TOUCH
- ▁APPEAR
- ▁LONDON
- ▁SAFE
- ▁SHARP
- ▁ATTACK
- ▁JANE
- ▁COVERED
- ▁OUTSIDE
- ▁WHATEVER
- ▁PLACED
- ▁RACE
- ▁SHORE
- ▁LAID
- ▁ROMAN
- ▁PERSONAL
- UP
- AU
- ▁REMAINED
- ▁HAPPINESS
- ▁AFTERNOON
- ▁DISTANCE
- ▁STORM
- ▁MARRIED
- ▁FRANK
- ▁VALLEY
- ▁BOUND
- ▁TALKING
- ▁JO
- ▁QUICK
- ▁STEP
- AND
- ▁ARMY
- ▁EFFORT
- ▁FRENCH
- ▁V
- LEY
- ▁PARTICULAR
- ▁START
- ATING
- OO
- LU
- ▁TRANS
- ▁HAPPEN
- ▁HABIT
- ▁VILLAGE
- ▁BELOW
- ▁GENTLEMAN
- BLE
- ▁BILL
- ▁SAVE
- ACT
- ▁SOCIETY
- ▁MAJOR
- ▁QUARTER
- ▁SKY
- ▁GUESS
- CY
- ▁SAD
- ILE
- ▁SL
- ▁PLEASANT
- ▁STRAIGHT
- ▁STRENGTH
- ▁FORTUNE
- ▁WRONG
- ▁COMMAND
- ▁BOX
- ▁QUIET
- ISE
- ▁JA
- IBLE
- ▁TREAT
- ▁GLANCE
- ▁NECESSARY
- ▁FORGET
- ▁MOUNTAIN
- ▁WINTER
- ▁DREW
- ▁WAV
- ▁PLAIN
- ▁ENTIRELY
- ▁TEA
- ▁SOFT
- ▁QUICKLY
- ▁INFLUENCE
- ▁DINNER
- ▁FOOD
- ▁CHAPTER
- ▁YE
- ▁REACH
- ▁GETT
- ▁PAPER
- ▁GIVING
- ▁BEGINNING
- ▁SEND
- ▁FIGHT
- ▁SCENE
- ▁RUSH
- ▁PI
- ▁MARK
- ▁NA
- ▁BROKE
- ▁CLASS
- ▁BATTLE
- ▁EASY
- ▁GROUP
- BY
- ▁STOP
- ▁DIRECTION
- ▁BESIDE
- ▁MOR
- HAM
- UFF
- ▁WEST
- ▁OBLIG
- ▁COLOR
- ▁SINGLE
- ▁EASILY
- ▁PALE
- ▁ACTION
- ▁INTER
- ▁STRANGER
- ▁WI
- ▁CONVERSATION
- ▁BLOW
- ▁MARY
- ▁MU
- ▁TERRIBLE
- ▁THINKING
- ▁PULL
- ▁MOON
- AB
- ▁REP
- ▁ESPECIALLY
- ▁HEAVY
- ▁SICK
- ▁LUCK
- ▁TRAIN
- ▁GUN
- ▁GU
- ▁WAITING
- ▁TURNING
- ITIES
- ▁BREAD
- ▁BELONG
- ▁LOUD
- ▁REPORT
- ▁AMERICAN
- ▁JOURNEY
- ▁ANXIOUS
- ▁LIPS
- ▁KILLED
- IGHT
- GO
- ▁CONSIDER
- ▁PROBABLY
- ▁PALACE
- ▁HISTORY
- ▁LAKE
- ▁SHUT
- ▁SIMPLY
- WA
- ▁PAIN
- ▁HORSES
- ▁SEEING
- FULLY
- ▁EXPECTED
- ▁EVIL
- ▁BURN
- ▁SIMPLE
- ▁DIRECT
- IFIED
- HER
- ▁SLOWLY
- ▁LEG
- UGH
- ▁SAIL
- RIC
- ▁WISHED
- ▁RULE
- ▁LAD
- ▁MORAL
- ▁MOVE
- ▁FOLLOWING
- ▁SILVER
- ▁SEARCH
- ▁CHANGED
- ▁HANDSOME
- ▁COULDN
- ▁PASSION
- ▁HU
- ▁SMILED
- ▁STREAM
- ▁CONCERN
- ▁PRESENCE
- STER
- ▁CONTENT
- ▁BOARD
- ▁SHAPE
- ▁DECIDED
- ▁MARRY
- ▁PERFECT
- ▁STEPS
- ▁CLOSED
- ABLY
- DEN
- ▁WEAK
- ▁SUFFICIENT
- ▁SHADOW
- ▁EXPECT
- ▁SPOT
- ▁DUTY
- ▁SPEAKING
- ▁BESIDES
- ▁FIELD
- ▁ROLL
- ▁TRYING
- ▁EAR
- ▁VER
- ▁MARRIAGE
- ▁SHOT
- ▁SLAVE
- ▁MILL
- ▁NATION
- ▁NECK
- ▁ARRIVED
- ▁TALL
- ▁GRACE
- LIN
- ▁FORTY
- ▁BROAD
- ▁SUMMER
- ▁COUSIN
- ▁BEGIN
- ▁CATCH
- ▁FO
- ▁PE
- ▁MEANT
- ▁THIN
- IO
- ▁GROW
- ▁TRO
- ▁NOTICE
- ▁CRY
- ▁FISH
- ▁COM
- ▁DEGREE
- ▁HONOUR
- ▁UNDERSTOOD
- ▁SHOP
- ▁TRUST
- ▁CONDITION
- ▁FARM
- IZ
- ▁SUDDEN
- ▁SUCCESS
- ▁SURPRISE
- ORS
- ▁THOUGHTS
- UND
- ▁ALLOWED
- ITE
- ▁NARROW
- ▁GLASS
- ▁SERIOUS
- ▁STICK
- ▁GAME
- ▁SPENT
- ▁SELL
- ▁GRA
- ▁LOWER
- ▁RAISED
- ▁PIN
- ▁ALLOW
- ▁CALM
- FT
- ▁L
- ▁PU
- ▁FIT
- ACH
- ▁SUFFER
- ▁LEGS
- ▁SUPPORT
- ▁FRANCE
- ▁LATTER
- OV
- ▁TASTE
- ▁GATE
- ▁INSTANT
- ▁MINUTE
- ▁OFFER
- ▁GREATER
- ▁PORT
- ILL
- ▁INDIVIDUAL
- ▁AUNT
- ▁EAST
- ▁ADVANTAGE
- ▁FASHION
- ▁SWORD
- ▁TWELVE
- ▁HONOR
- ▁MOVEMENT
- ▁ISLAND
- ACK
- ▁WOODS
- NCH
- ▁PLEASED
- ▁ENEMY
- ▁RAIN
- ▁VARIOUS
- ▁OBSERVED
- ▁LADIES
- ▁BELIEVED
- ▁CAST
- ▁RISE
- ▁BALL
- ▁MONTHS
- ICE
- ▁MURDER
- ▁CONDUCT
- ▁SOCIAL
- ▁TENDER
- ▁LEARNED
- ▁FRA
- ▁FIRM
- CLOCK
- ▁PREVENT
- ▁RING
- LIE
- ▁GOLDEN
- ▁DECLARED
- ▁BUILDING
- ▁WRITE
- ▁ATTEND
- ▁CARRIAGE
- ▁SITUATION
- IDE
- ▁NOBLE
- ▁HUNG
- ▁RUNN
- ▁YELLOW
- ▁KNOWLEDGE
- ▁YORK
- ▁PUSH
- ▁LEAVING
- ▁POST
- ▁CIRCUMSTANCES
- ▁SEEK
- ▁FINALLY
- ▁MAIN
- ▁LETTERS
- ▁POL
- ▁ADD
- FE
- ▁ANCIENT
- ▁MARCH
- ▁WINE
- ▁STATES
- ▁WALLS
- ▁PRISONER
- ▁ISABEL
- ▁TEMPER
- ▁JUDGE
- ▁FAINT
- ▁POND
- ▁GRASS
- ▁FAM
- OUT
- ▁LAUGH
- ▁GRAY
- IGN
- ▁ESCAPE
- ▁KILL
- ▁PRAY
- ▁COMES
- ▁ABSOLUTE
- ▁BLIND
- ▁WIN
- ▁HOST
- ▁MERELY
- ▁RID
- ▁EVERYBODY
- ▁MATERIAL
- ▁STRETCH
- ▁DUE
- ▁ROW
- ▁TIN
- ▁PROMISE
- ▁LISTEN
- ▁WALKING
- ▁COMPANION
- ▁INDIAN
- ▁BREAK
- ▁BENEATH
- ▁RUIN
- ▁EDGE
- ▁WOR
- ▁FORMER
- ▁WORSE
- ▁EVIDENTLY
- ▁HARM
- ▁CENT
- ▁PIECE
- ▁LOT
- ▁PRESIDENT
- ▁SPECIAL
- ▁LABOR
- ▁HEALTH
- GA
- ▁PLACES
- ▁BEN
- ▁SOMEWHAT
- ▁DROPPED
- ▁AFFECTION
- ▁EXACTLY
- ▁DARKNESS
- ▁FALLEN
- ▁DRESSED
- ▁BILLY
- ▁ACCEPT
- ▁FL
- ▁HOT
- ▁REPEATED
- ▁MEETING
- PA
- ▁PERIOD
- ▁HONEST
- ▁INSTANCE
- ▁FLA
- ▁PASSAGE
- ▁NE
- ▁POSSESSION
- ▁WEAR
- ▁PEACE
- ▁COAT
- ▁HOUSES
- ▁MOUNTAINS
- ▁FIFTEEN
- ▁WELCOME
- ▁YARD
- ▁PROPER
- ▁MUS
- ADE
- ▁RECEIVE
- ▁SKIN
- ▁GROWN
- ▁AFTERWARDS
- ANG
- ▁DA
- ▁DIFFICULT
- ▁PERSONS
- ▁ACCORDING
- ▁FARMER
- ▁SPEECH
- ▁IMPORTANT
- PAR
- ▁PERFECTLY
- ▁MIN
- ▁CONSIDERED
- ▁NU
- ▁DEPEND
- ▁MORROW
- ▁MOUNT
- ▁KISS
- ▁LYING
- ▁SUFFERING
- ▁EXIST
- ERY
- OOK
- BA
- ▁PAINT
- AH
- ▁CAT
- ▁PURE
- ▁WISE
- ▁PRIVATE
- ▁REBECCA
- ▁VESSEL
- ▁CLEAN
- ▁GENTLEMEN
- ▁IRON
- ▁STORE
- ▁FUR
- ▁INDIANS
- ▁LOSE
- ▁BATH
- ▁NEWS
- ▁CHI
- ▁FA
- ▁CHARGE
- ▁PRIEST
- ▁WRITTEN
- ▁FORGOTTEN
- ▁TRAIL
- ▁CLOTHES
- ▁ALIVE
- ▁SUB
- ▁REPLY
- ▁THROW
- ▁AB
- ▁SOLDIERS
- ▁ISN
- ▁COTTAGE
- ▁COURAGE
- ▁CONTAIN
- ▁BUILT
- ▁PAID
- ▁HUNT
- ▁CASTLE
- HOOK
- ▁MERE
- GGED
- ▁NI
- ▁UNC
- ▁PREPARED
- ▁BARE
- ▁SMILING
- ▁SPREAD
- ▁WEATHER
- ▁EDWARD
- ▁GERMAN
- ▁CURIOUS
- ▁SERVANT
- ▁DISCOVERED
- ▁TRAVEL
- EY
- ▁DANCE
- ▁PEN
- BR
- GEN
- ▁BREAKFAST
- ▁CHAMBER
- ▁WILLIAM
- ▁TERROR
- ▁SPITE
- ▁TIRED
- ▁LOCK
- ▁CONSIDERABLE
- TLE
- ▁MANAG
- ▁DRY
- ▁FINISHED
- ▁MILLION
- ▁FRE
- ▁MIS
- ▁PASSING
- ▁DRAW
- ▁BON
- ▁VA
- ▁VEN
- ▁MAKES
- ▁VAIN
- ▁BOTTOM
- ▁DRINK
- ▁FUTURE
- ▁RACHEL
- ▁SORROW
- ▁SIXTEEN
- ▁KNIT
- ▁PROUD
- WI
- ▁TOBY
- ▁NOISE
- ▁SLIGHT
- ▁PROCEED
- ▁FER
- ▁COVER
- ▁DRAWING
- ▁FAVOR
- ▁CATHERINE
- ▁NEWSPAPER
- ▁NOBODY
- ▁ROOF
- ▁WEALTH
- ▁PROVE
- ▁DRAWN
- TTED
- OKE
- ▁DETERMINED
- ▁DOG
- ▁REMEMBERED
- ▁OPENING
- ▁FLOWERS
- ▁GENTLE
- ▁KNIGHT
- ▁RECOVER
- ▁DESERT
- ▁MOTION
- ▁NICE
- ▁INTENTION
- ▁GROWING
- ▁CLOUD
- ▁MONTH
- HOOD
- ▁POT
- UDE
- ▁PLANT
- ▁MAD
- ▁ENJOY
- ▁FAT
- ▁COR
- ▁KNOWING
- ▁IDEAS
- IZED
- ▁CHEEK
- ▁EUROPE
- ▁KNOCK
- ▁ALARM
- ▁TONGUE
- ▁SPACE
- ▁PATSY
- ▁MISTRESS
- ▁HENRY
- ▁JERRY
- ▁LIKED
- ▁PLAYED
- ▁BOOKS
- ▁MODER
- ▁CORN
- ▁ELIZABETH
- ▁CLUB
- ▁BRAIN
- ▁TROOP
- ▁COOK
- ▁DU
- ▁FUN
- DAY
- ▁QUA
- ▁FLOW
- ▁DARE
- ▁DELIGHT
- ▁WOUND
- ▁DESCEND
- ▁EVERYWHERE
- ▁FRIGHTENED
- ▁GEORGE
- ▁PECULIAR
- ▁MACHINE
- ▁PATIENT
- ▁MEADOW
- ▁PEASANT
- ▁BURST
- ▁ORDINAR
- ▁SONG
- ▁BRAVE
- ▁EXISTENCE
- ▁LUCY
- ▁J
- ▁CAREFULLY
- ▁PRESENTLY
- ▁GEN
- ▁COW
- LLY
- ▁PROMISED
- UOUS
- ▁LIFTED
- ▁MEANING
- ALL
- ▁FAIL
- NER
- ▁REGULAR
- ▁VIRTUE
- ▁STUDY
- ▁PROTECT
- ▁FOND
- ▁FANCY
- ▁STOCK
- ▁KEY
- ▁JUSTICE
- ▁PACK
- LET
- ▁AFFAIRS
- ▁DIFFICULTY
- ▁WORE
- ▁COST
- ▁HEAT
- ▁SHOULDER
- ▁OFFERED
- ▁MISTAKE
- ▁DOLLARS
- ▁LOOKS
- QUA
- ▁BREAST
- ▁PRINCIPLE
- ▁CHARLES
- ▁TEETH
- ▁OCCUPIED
- ▁DROP
- ▁PAPA
- ▁SHEEP
- ▁KNOWS
- ▁DECK
- ▁BORE
- ▁EXC
- ▁SURPRISED
- ▁STATION
- ▁PL
- ▁PR
- ▁OURSELVES
- ▁SYMPATHY
- ▁RUTH
- ▁EXCITED
- ▁CONTROL
- ▁ANGRY
- ▁IMAGINATION
- ▁WITNESS
- ▁HOLDING
- THER
- DA
- ▁TRADE
- ▁CREATURE
- ▁SISTERS
- ▁JOIN
- LAS
- ▁ALTOGETHER
- ▁CIVIL
- ▁EMPTY
- ▁LEAP
- ▁HURT
- ▁BOLD
- ▁TASK
- ▁POLICE
- ▁DRAGON
- ▁MAID
- ▁CLAIM
- ▁SHAME
- ▁PHYSICAL
- ▁CONC
- ▁SEIZED
- ▁OB
- ▁LIVES
- ▁HEIGHT
- ▁GI
- ▁PAL
- ▁CHARMING
- ▁FEELINGS
- ▁SERVANTS
- ▁DELIVER
- ▁FRUIT
- ▁SATISFIED
- ▁STRUGGLE
- ▁WROTE
- ▁CONCEAL
- ▁MOVING
- ▁FLASH
- ▁OPPOSITE
- ▁HURRY
- ▁ROUGH
- ▁PRICE
- ▁AWFUL
- ▁SAND
- ▁SLIPP
- ▁SHOWN
- ▁SPRA
- ▁AGREED
- ▁FIXED
- ▁PERCEIVED
- ▁UPPER
- ▁FINGER
- ▁FINGERS
- ▁EAGER
- LF
- ▁EARS
- LIGHT
- ▁IMAGINE
- ▁LIKELY
- ▁COAST
- ▁UNITED
- ▁VAN
- ▁EXPLAINED
- ▁TELLING
- ▁DANGEROUS
- ▁DICK
- ▁COOL
- ▁CAL
- ▁INSIST
- BI
- ▁SECURE
- ▁HILLS
- ▁SAN
- ▁CHEER
- ▁FILL
- ▁BUY
- ZA
- HI
- ▁CLOTH
- ▁POSSESSED
- ▁ADVANCE
- ▁METHOD
- ATIVE
- ▁GREATLY
- ▁SMOKE
- ▁HIGHER
- ▁COMPANIONS
- ▁ANIMALS
- ▁GALL
- ▁QUIETLY
- ▁TRAVELL
- ▁RESOLVED
- ▁FLEW
- ▁CARLYLE
- ▁MEMORY
- ▁RESIST
- ▁GRAHAM
- ▁LAUGHING
- ▁FAITH
- ▁BIRD
- CRI
- ▁LEAVES
- ▁AMERICA
- ▁DEMAND
- BOARD
- ▁AWAKE
- ▁CURIOSITY
- ▁LANGUAGE
- ▁VIOLENT
- ▁AWARE
- ▁DOUBLE
- ▁LOOSE
- LIKE
- ▁ADAM
- ▁RISING
- ▁HOTEL
- ▁BAND
- ▁ENGAGED
- ▁HEADS
- ▁LOG
- ▁FORMED
- ▁WINDOWS
- ▁PREFER
- RUS
- ▁THROWN
- ▁ARCH
- ▁PAUSE
- ▁SERVE
- KIN
- ▁FALLING
- ▁VO
- ▁WHISPERED
- ▁POWERFUL
- ▁ER
- ▁DEPART
- ▁CRUEL
- ▁EXAMPLE
- ▁SMOOTH
- ▁INTRODUC
- ▁RELIGION
- ▁SEVENTEEN
- ▁ABSENCE
- ▁PRINT
- ▁SHINING
- ▁ICE
- ▁POET
- ▁DREADFUL
- ▁REQUIRED
- ▁ORIGINAL
- ▁POINTED
- ▁INSIDE
- ▁BROTHERS
- ▁PRODUCED
- ▁SPOKEN
- ▁CREATURES
- ▁FLY
- ▁TOM
- ▁PURSU
- ▁SYSTEM
- ▁EXCELLENT
- ▁EXCITEMENT
- ▁MIDDLE
- ▁FALSE
- ▁REGRET
- ▁RAY
- ▁PHYSICIAN
- ▁COP
- ▁VALUE
- ▁TOUCHED
- ▁FLAT
- ▁OAK
- ▁SUM
- ▁LOSS
- ▁PAPERS
- ▁STEPP
- ▁REVER
- ▁SHADE
- SOME
- ▁LISTENED
- ▁N
- ▁DISCOVER
- ▁BITTER
- TERN
- ▁HOLE
- ▁ADVANCED
- ▁PICK
- ARTAGNAN
- ▁CORPORAL
- ▁ASLEEP
- ▁TEMPLE
- ▁INDICAT
- IUM
- ▁FARTHER
- ▁EXCUSE
- ▁FLU
- ▁NOSE
- ▁SIXTY
- ▁SUPPOSED
- ▁PROVED
- ▁RATE
- ▁SHOULDERS
- ▁AFFAIR
- ▁FIELDS
- ▁REMARKED
- AVE
- ▁WEEKS
- ▁ESTABLISH
- ▁PARIS
- ▁ADMIT
- ▁NEIGHBOR
- ▁ATTRACT
- ▁CUSTOM
- ▁DISTINGUISH
- ▁SURFACE
- ▁COUPLE
- ▁DEVIL
- ▁LIMIT
- ▁ROYAL
- ▁FOOL
- ▁RARE
- ▁PRIDE
- ▁PROFESSOR
- ▁SAKE
- ▁DALE
- ▁VAST
- ▁REFUSED
- ▁FAILED
- ▁BAG
- ▁ROB
- ▁WASH
- ▁FAIRY
- ▁FREQUENT
- ▁MARILLA
- ▁PROGRESS
- ▁RELIEF
- ▁DROVE
- ▁DOZEN
- ▁AHEAD
- ▁ADVENTURE
- ▁GRANT
- ▁PRIM
- ▁MENTAL
- ▁PAIR
- ▁IMPRESSION
- ▁WOUNDED
- ▁FULLY
- ▁DISAPPEARED
- ▁MILE
- ▁DRIVE
- ▁MUD
- ▁SIZE
- ▁ANIMAL
- ZE
- ▁GRE
- ▁REPRESENT
- ▁ACQUAINTANCE
- ▁INSTRUMENT
- ▁SPLENDID
- ▁UNKNOWN
- ▁CORONEL
- ▁EMPEROR
- ▁EARNEST
- ▁EXTEND
- ▁BRIEF
- ▁RENDER
- ▁PARENTS
- ▁GENTLY
- ▁CALLING
- ▁TRIBE
- ▁CHRISTIAN
- ▁INTERESTING
- ▁LAMP
- ▁JIMM
- ▁DIV
- ▁LOVER
- UCH
- ▁HID
- ▁NEEDED
- ▁ORDERED
- ▁MEAL
- ▁SLOW
- ▁DAM
- ▁CLOUDS
- ▁DAN
- ▁GAR
- ▁EXPLAIN
- ▁QUI
- ▁CLIMB
- ▁HURRIED
- ▁MURMUR
- ▁SWIFT
- ▁ARTHUR
- ▁JEFF
- ▁KINGDOM
- ▁MESSAGE
- ▁PROTEST
- ▁ORGAN
- ▁RISK
- ▁FORGIVE
- ▁OCCURRED
- ▁PEARL
- ▁ODD
- ▁INFORMATION
- ▁BUSY
- ▁TRI
- ▁LACK
- ▁BAY
- ▁FLEET
- ▁CROWN
- ▁WAITED
- ▁BIRDS
- ▁PITY
- ▁SUCCEEDED
- ▁INFORMED
- ▁WISHES
- ▁DIRECTLY
- ▁CABIN
- ▁AUGUST
- ▁COUNTENANCE
- ▁HORROR
- ▁PHILIP
- ▁POPULAR
- ▁PREVIOUS
- ▁CONTRARY
- ▁ARTICLE
- ▁DIFFERENCE
- ▁HIDDEN
- ▁HUGE
- ▁AUTHORITY
- ▁POUND
- ▁JUMP
- ▁SPI
- ▁SHAKE
- ▁EVENTS
- ▁FRO
- ▁LEAN
- ▁CRO
- ▁TRIM
- ▁SHARE
- ▁FISHER
- ▁SETTLED
- ▁QUESTIONS
- ▁SI
- ▁VAL
- ▁APPROACHED
- ▁SUGGESTED
- ▁CONTINU
- ▁PERFORM
- ▁ACKNOWLEDG
- ▁CLIFF
- ▁COLONEL
- ▁GHOST
- ▁MAJESTY
- ▁EMOTION
- ▁SUPPER
- ▁DISTANT
- ▁INTERESTED
- ▁JACK
- ▁HUM
- ▁TRAMP
- ▁BRI
- ▁POUR
- ▁SHIPS
- ▁CHAIN
- ▁DY
- ▁RANK
- ▁MATTERS
- ▁LOVELY
- AW
- ▁PAT
- ▁WORKING
- ▁CONSEIL
- ▁EVIDENCE
- ▁MERCHANT
- ▁SOLEMN
- ▁CONSTANT
- ▁MINISTER
- ▁OFFICIAL
- ▁SENTIMENT
- ▁CENTURY
- ▁DELAY
- ▁JAMES
- ▁MATCH
- ▁FOREIGN
- ▁AROSE
- ▁BEAST
- ▁BAB
- ▁WIT
- ▁REMARKABLE
- ▁THOR
- ▁COMPAR
- ▁MAL
- ▁NEARER
- ▁FOURTH
- ▁GREY
- ▁MENTION
- ▁RUBB
- ▁CHARM
- ▁BARON
- ▁DESIRED
- SCAR
- ▁HOPED
- ▁TEACHER
- ▁MON
- ITCH
- BEL
- ▁PARTS
- ▁EIGHTY
- LAC
- GGING
- ▁REFLECT
- ▁COLLECT
- ▁BULL
- ▁CONSCIOUS
- ▁MOMENTS
- ▁DISTURB
- ▁COLLEGE
- ▁EGGS
- ▁STUPID
- ▁YESTERDAY
- ▁EXAMINE
- ▁FAULT
- ▁DEPTH
- ▁ROOT
- ▁MOUSE
- ▁SOUGHT
- ▁TURTLE
- ▁NATIVE
- ▁CRACK
- ▁SOLD
- ▁INVIT
- ▁PICKED
- ▁CEASED
- ▁HEARING
- ▁MIDS
- ▁PLAYING
- ▁STAGE
- ▁UNTO
- ▁GAIN
- ▁MIST
- ▁ORDERS
- ▁KNEES
- ▁TALE
- ▁DISTINCT
- ▁BENT
- ▁DESPAIR
- ▁TRIUMPH
- ▁SQUARE
- ▁THROAT
- ▁BOUGHT
- ▁PERMIT
- ▁SPEND
- ▁TRIP
- ▁THREATEN
- ▁ROME
- INESS
- ▁EXPOS
- GON
- ▁WRITING
- ▁INCREASED
- ▁PORTION
- ▁TENT
- IUS
- ▁YO
- ▁INTENDED
- ▁NAMED
- RATION
- ▁NOTIC
- ▁PIPE
- ▁WILLING
- ▁INSTANTLY
- ▁SERVED
- ▁BAL
- ▁POSSESS
- ▁CRE
- ▁ADMIRATION
- ▁LIBERTY
- ▁OPPORTUNITY
- ▁SELDOM
- ▁BIRTH
- ▁GLOW
- ▁INCLUD
- ▁REQUEST
- ▁TYPE
- ▁SLEPT
- ▁CRIME
- ▁MOTIVE
- ▁ELSIE
- ▁BEGUN
- ▁CONSENT
- ▁ADMITTED
- ▁AVOID
- ▁ADDRESS
- ▁HATE
- ▁DEMANDED
- ▁APPARENTLY
- ▁SUGGESTION
- ▁CONSIDERATION
- ▁BLESS
- ▁PROCEEDED
- NCY
- ▁PRISON
- ▁CONT
- ▁SHOUTED
- ▁FACES
- ▁SPIRITS
- ▁DEVELOP
- ▁ACCIDENT
- ▁ADVICE
- ▁INNOCENT
- ▁INSTINCT
- ▁UNCONSCIOUS
- ▁MYSTERIOUS
- ▁PRETEND
- ▁PEEP
- ▁ANYONE
- ▁DUKE
- ▁PLUM
- VILLE
- ▁SEVERE
- ▁ALAS
- ▁DELIGHTED
- ▁ISSUE
- ▁ASKING
- ▁CROW
- ▁ACCEPTED
- ▁RIDE
- ▁DOORS
- ▁TAR
- ▁PREPAR
- ▁SUGGEST
- WOOD
- ▁CITIZEN
- ▁ENTRANCE
- ▁LINCOLN
- ▁POLITICAL
- ▁PRACTICAL
- ▁STIFF
- ▁WIDOW
- ▁CAPITAL
- ▁CLEVER
- ▁MAMMA
- ▁CREDIT
- ▁OBEY
- ▁STRING
- ▁DAILY
- ▁ARGUMENT
- ▁HEAP
- ▁APARTMENT
- ▁FLIGHT
- ▁ELDER
- ▁PUR
- ▁PAGE
- ▁DUST
- ▁GAZE
- ▁NATIONAL
- ▁BABY
- DDING
- ISTS
- ▁TEACH
- ▁STREETS
- CAL
- ▁GE
- AFF
- ▁GOES
- ▁POSSIBL
- UNG
- ▁LINES
- GUE
- ▁VOTE
- ▁HUNTING
- ▁QUO
- ▁RESEMBL
- ▁BASKET
- ▁CIRCLE
- ▁CONSEQUENCE
- ▁KITCHEN
- ▁TREASURE
- ▁NEVERTHELESS
- ▁FANCI
- ▁ASSEMBL
- ▁GRIEF
- ▁VEIL
- ▁SEASON
- ▁INVENT
- ▁VIRGINIA
- ▁HUT
- ▁GUEST
- ▁ROAR
- ▁BEHOLD
- ▁VICTORY
- ▁CAPABLE
- ▁DULL
- ▁SHOE
- ▁FLOAT
- ▁MERRY
- ▁IMMEDIATE
- ETH
- ▁ELEANOR
- ▁EXPLANATION
- ▁PARLIAMENT
- ▁PRINCIPAL
- ▁PROPORTION
- ▁RESOLUTION
- ▁UNUSUAL
- ▁BLUFF
- ▁NINETEEN
- ▁SENSATION
- ▁VISIBLE
- ▁INCOME
- ▁FATE
- ▁SUPER
- ▁LAUGHTER
- ▁EASE
- ▁LOAD
- ▁JEW
- ▁ZE
- ▁FEVER
- ▁WEDDING
- ▁JOINED
- ▁TRACE
- ▁LEADER
- ▁CLEARLY
- ▁FLOWER
- ▁TERMS
- ▁EMPLOYED
- OCK
- ▁PARTICULARLY
- ▁MEMBERS
- ▁CONFESS
- ▁GRO
- ▁ADDRESSED
- ▁CHRIST
- ▁ACCOMPANI
- ▁AFFORD
- ▁AMOUNT
- ▁BRILLIANT
- ▁COMMUNICAT
- ▁FIERCE
- ▁RECORD
- ▁SACRIFICE
- ▁TEMPT
- ▁CORDIAL
- ▁COLOUR
- ▁PROOF
- ▁ESTATE
- ▁PARDON
- ▁ADVIS
- ▁ATTITUDE
- ▁IMPORTANCE
- ▁BOOT
- ▁SHOCK
- ▁FIR
- ▁PLENT
- ▁HIT
- ▁MEMBER
- ▁SUR
- ▁SEATED
- ▁MAG
- AVING
- ▁FAVOUR
- ▁REMARK
- ▁DIM
- ▁FAITHFUL
- ▁SAVED
- CHI
- ▁SIN
- THE
- ▁CONFIDENCE
- ▁EXTRAORDINARY
- ▁FORTUNATE
- ▁MISFORTUNE
- ▁PATIENCE
- ▁RELIGIOUS
- ▁SATISFACTION
- ▁POSITIVE
- ▁SIMILAR
- ▁EXCHANG
- ▁RETREAT
- ▁FLESH
- ▁ADMIRE
- ▁SPIRITUAL
- ▁DAWN
- ▁BURIED
- ▁URGE
- ▁SUNDAY
- ▁FOX
- ▁EMMA
- ▁NURSE
- ▁SNAPP
- ▁PARK
- ▁OBTAIN
- ▁RECOGNIZED
- ▁SPEED
- ▁MAGIC
- ▁LAWS
- ▁REMOVED
- ▁HAM
- ▁PRESERV
- ▁AID
- HOUSE
- ▁MENTIONED
- ▁CONSCIENCE
- ▁CONTEMPT
- ▁DETAIL
- ▁IMMENSE
- ▁NERVOUS
- ▁PRISCILLA
- ▁UNFORTUNATE
- ▁UNHAPPY
- ▁COMPLAIN
- ▁TWICE
- ▁WHISTL
- ▁SNAKE
- ▁WASHINGTON
- ▁PIRATE
- ▁WICKED
- ▁BODIES
- ▁DESIGN
- ▁JASON
- ▁VAGUE
- ▁CONSIST
- ▁GIFT
- ▁ANGEL
- ▁RODE
- ▁FOLD
- ▁BRIDE
- ▁ANGER
- ▁BASE
- ITUDE
- ▁CONCLUDED
- ▁ALTER
- ▁FRI
- ▁PANT
- ▁BID
- ▁HIGHEST
- ▁SAILOR
- MPLE
- ▁OBSERV
- ▁CHEERFUL
- IFICATION
- RID
- ▁DESCRIBED
- ▁BIN
- ▁JEWEL
- ▁ARTIST
- ▁PEER
- ▁NORA
- ▁SKI
- ▁DIAMOND
- ▁ENCOURAGE
- ▁PRIVILEGE
- ▁PROJECT
- ▁ANYBODY
- ▁ENCOUNTER
- ▁HOLLOW
- ▁YIELD
- ▁BOBBY
- ▁SAVAGE
- ▁SOMEBODY
- ▁OTHERWISE
- ▁PRAISE
- ▁PROBLEM
- ▁DISTRESS
- ▁UGLY
- ▁WARRIOR
- ▁MOURN
- ▁RELIEV
- ▁DESK
- ▁FOOLISH
- ▁STARTLED
- ▁SKILL
- SHONE
- ▁LONE
- ▁OBSERVATION
- ▁DENI
- ▁NEST
- ▁SOLDIER
- ▁RELATION
- ▁TRULY
- ▁VISITOR
- ▁OFFICERS
- ERSON
- ▁YA
- ▁EVIDENT
- ▁DREAMS
- ▁KEEPING
- ▁PLAINLY
- ▁DRUNK
- ▁EMBRAC
- ▁INTELLIGENCE
- ▁LIEUTENANT
- ▁PERSUADE
- ▁SURROUNDING
- ▁UNIVERSAL
- ▁GLEAM
- ▁SUPERIOR
- ▁WHEEL
- ▁JEALOUS
- ▁QUEER
- ▁PIERRE
- ▁MILK
- ▁RAIL
- ▁FLUSH
- ▁STAIRS
- ▁JESUS
- ▁HORN
- ▁REGION
- ▁SAFETY
- ▁KA
- ▁GUIDE
- ▁CAKE
- ▁CUP
- ▁INQUIRED
- ▁DEFI
- ▁LESSON
- ▁WRETCHED
- ▁PACE
- ▁TEST
- ▁READING
- ▁ENTIRE
- ▁NET
- ▁DOGS
- ▁COMMANDER
- ▁PRODUCE
- ▁GAINED
- ▁ARRIVAL
- ▁FAMILIAR
- ▁MEANWHILE
- ▁SUSPICION
- ▁CHOICE
- ▁IMPULSE
- ▁THRUST
- ▁PROCESS
- ▁SUMMON
- ▁SHEPHERD
- ▁HASTILY
- ▁GRASP
- ▁COUNTESS
- ▁STYLE
- ▁DWELL
- ▁MERIT
- ▁PITCH
- ▁HUNGRY
- ▁SPORT
- ▁LOUISE
- ▁STERN
- ▁PROVIDED
- ▁ASSUME
- ▁EARLIE
- ▁RAGE
- ▁U
- ▁RAPIDLY
- PORT
- ▁SUCCESSFUL
- ▁FLED
- ▁AGREE
- ▁CONDITIONS
- ▁RELATIONS
- ▁DREAD
- ▁NATURALLY
- ▁EARL
- ▁GAY
- ▁HYPNOTI
- ▁PUTT
- ▁GAZ
- ▁JIM
- ▁PAUS
- ▁PROPOS
- ▁ADMINISTRATION
- ▁ELEVEN
- ▁HOSPITAL
- ▁MAGISTRATE
- ▁STRIKE
- ▁DIGNITY
- ▁GLORY
- ▁BOTTLE
- ▁THRONE
- ▁RECKON
- ▁COSETTE
- ▁MOREOVER
- ▁APPLI
- ▁HIND
- ▁PRODUCT
- ▁POOL
- ▁TRIAL
- HAN
- ▁ERIC
- ▁CUB
- ▁PIECES
- ▁EXCEPTION
- ▁ENJOYED
- ▁DARED
- ▁TRU
- ▁CLOSELY
- ▁RAPID
- ▁AFFECTED
- ▁REQUIRE
- ▁SOFTLY
- ▁BROW
- UCK
- ▁MARKED
- ▁SEVENT
- ▁ELECT
- ▁FORGOT
- ▁CORRECT
- ▁FRANCS
- ▁MARGUERITE
- ▁SCIENCE
- ▁UNEXPECTED
- ▁FOUGHT
- ▁MILITA
- ▁THUNDER
- ▁VOYAGE
- ▁GANEM
- ▁FREEDOM
- ▁NODDED
- ▁CAPTURE
- ▁MORTAL
- ▁OWNER
- ▁POLITE
- ▁VISION
- ▁EDUCATION
- ▁GOVERNOR
- ▁RAV
- ▁REWARD
- ▁HASTE
- ▁REPEAT
- ▁DETERMIN
- ▁PITI
- ▁KNEE
- LINE
- ▁DEVOTED
- ▁INTERRUPTED
- ▁FOLKS
- ▁EXTREME
- ▁APPROACH
- ▁CONTINUE
- ▁BEARING
- ▁CHAP
- ▁ACQUAINTED
- ▁GLIMPSE
- ▁GRADUALLY
- ▁SUNSHINE
- ▁PRACTICE
- ▁SUPPLI
- ▁DAVID
- ▁DRIFT
- ▁SHOWING
- ▁LEVEL
- ▁PROMPT
- ▁QUARREL
- ▁REPRESENTATIVE
- ▁PLUNG
- ▁GIANT
- FALL
- ▁STOUT
- CHA
- WEPT
- ▁GLANC
- ▁SALT
- ▁CHOSEN
- ▁BUCK
- ▁REALIZED
- ▁REALITY
- ▁TUR
- ▁DRIVEN
- ▁CARD
- ▁PRAYER
- ▁TERM
- AID
- ▁HOLY
- ▁ENDURE
- ▁RANGE
- ▁HANG
- ▁SAM
- LAN
- ▁CAVE
- INA
- ▁GRI
- ▁SIGH
- ▁NEIGHBOUR
- ▁COUNCIL
- ▁EXERCISE
- ▁NAUTILUS
- ▁SOMEWHERE
- ▁SYLVIA
- ▁THOROUGH
- ▁VICTIM
- ▁BRIDGE
- ▁COMPELLED
- ▁INCLINED
- ▁OVERCOME
- ▁RESERVE
- ▁ARREST
- ▁PRECIOUS
- ▁DUTCH
- ▁OCEAN
- ▁ACQUIR
- ▁RECALL
- ▁DESTIN
- ▁ATTACH
- ▁SLIM
- ▁WEEP
- ▁CONSCIOUSNESS
- ▁TIGHT
- ▁WAKE
- ▁COMFORTABLE
- ▁ACTIVE
- ▁WINGS
- ▁GRIN
- ▁AFFECT
- ▁WHIT
- ▁IDEAL
- ▁EASTER
- ▁APPROACHING
- ▁CREATED
- ▁PLANS
- ▁INCREASE
- ▁FLYING
- ▁SHOUT
- OES
- MISSION
- ▁ARMED
- ABILITY
- ▁BLUSH
- ▁CONNECTION
- ▁MATTHEW
- ▁MEDICINE
- ▁REMIND
- ▁EXHIBIT
- ▁BLOCK
- ▁DESERVE
- ▁LISTENING
- ▁TITLE
- ▁FLOUR
- ▁FLAME
- ▁AGENT
- ▁USEFUL
- ▁BRIG
- ▁BOIL
- ▁ASSURED
- ▁REFLECTION
- ▁PINE
- ▁WAG
- ▁YOUNGER
- ▁BEARD
- ▁KINDNESS
- CTUALLY
- ▁ACTUAL
- ▁WEIGHT
- ▁LILY
- ▁IMPRESS
- ▁DESCRIBE
- ▁BEHELD
- ▁COMMUNITY
- ▁DESPERATE
- ▁DISPLAY
- ▁ENEMIES
- ▁MELANCHOLY
- ▁MIRROR
- ▁RECOMMEND
- ▁SPANISH
- ▁BLAME
- ▁VOLUME
- ▁SHOOT
- ▁COMBIN
- ▁SHAKING
- ▁SOUTHERN
- ▁MYSTERY
- ▁EVERYONE
- ▁COMMISSION
- ▁COMPOSED
- ▁UDO
- ▁IMAGE
- ▁DECEIV
- ▁FAILURE
- ▁PATTY
- ▁ALICE
- ▁FRAME
- ▁MODEST
- ▁MAGNIFICENT
- ▁BRANCHES
- ▁REIGN
- ▁RAG
- ▁PARISH
- ▁KATE
- ▁AMID
- ▁SLEEPING
- ▁ANNOUNCED
- ▁EAGERLY
- ▁WIRE
- ▁LAP
- ▁ARAB
- ▁EATING
- ▁RUM
- ▁CAREFUL
- ▁DISCUSS
- WORTH
- ▁DISTRICT
- ▁FOREHEAD
- ▁FRANCIS
- ▁INCIDENT
- ▁APPEAL
- ▁EMBARRASS
- ▁MAINTAIN
- ▁PRONOUNC
- ▁FURNISH
- ▁STRAIN
- ▁ELEMENT
- ▁SILK
- ▁FEAST
- ▁RECENT
- ▁DANCING
- ▁LODGE
- ▁ASHAMED
- ▁TRICK
- ▁BOBO
- ▁STUFF
- ▁ET
- ▁ASSERT
- ▁SANK
- ▁TREATMENT
- ECI
- ▁SWIM
- ▁BECOMING
- ▁SINGING
- ▁PLATE
- ▁SCATTERED
- ▁EXTREMELY
- ▁GRIM
- ▁SANG
- ▁FIGHTING
- ▁FACTOR
- ▁PAINFUL
- ▁HIDE
- ▁FUNN
- ▁AFTERWARD
- ▁FROG
- ▁VENTURE
- ▁DISAPPOINT
- ▁COMRADE
- ▁MONSIEUR
- ▁OBVIOUS
- ▁PASSENGER
- ▁PROFOUND
- ▁PUBLISH
- ▁ACCUSTOM
- ▁BLOOM
- ▁SMITH
- ▁RELATIVE
- ▁ACCUSE
- ▁MANIFEST
- ▁SOLID
- ▁MONSTER
- ▁MARIUS
- ▁CANDLE
- ▁PROCUR
- ▁INTERFERE
- ▁HOUSEHOLD
- ▁DEVELOPMENT
- ▁AGREEABLE
- ▁HALT
- ▁NECESSITY
- FOLD
- ▁CITIES
- ▁REGI
- ▁GLOOMY
- BBL
- ▁SEPARATED
- ▁CHEST
- ▁STRIP
- ▁SPAR
- ▁DUN
- ▁SETTLE
- ▁STARED
- ▁HANGING
- ▁FEATURES
- ▁PILE
- ▁ORIGIN
- ARIES
- ▁LION
- ▁ALI
- ▁ASTONISHMENT
- ▁COMPLIMENT
- ▁DELICATE
- ▁COUNSEL
- ▁FIFTH
- ▁SUPPRESS
- ▁BURDEN
- ▁COMPLEX
- ▁ADDITION
- ▁CRUSH
- ▁TWIST
- ▁PIANO
- ▁BRUSH
- ▁CHECK
- ▁ANNIE
- ▁SHELTER
- ▁IMPROV
- ▁WESTERN
- ▁LOCAL
- ▁APPLE
- ▁GREET
- ▁MASK
- ▁RUSSIAN
- ▁TOWER
- ▁CREW
- ▁TIP
- ▁WANDERING
- ▁READER
- ▁WANDERED
- ▁DESTROY
- ▁OBSERVE
- MORE
- ▁ESCAPED
- ▁PET
- ▁BUILD
- ▁REAR
- ▁DESTROYED
- HIN
- ▁OWE
- ▁RANG
- ▁TEAR
- ▁NED
- ▁OFFICER
- ▁TRAP
- ▁OCCUR
- ▁APPOINTED
- ▁ATMOSPHERE
- ▁CHOOSE
- ▁CONCLUSION
- ▁CULTIVAT
- ▁DESCRIPTION
- ▁ENORMOUS
- ▁EXHAUSTED
- ▁LANDSCAPE
- ▁NATASHA
- ▁PROSPECT
- ▁REFRESH
- ▁SPECIES
- ▁SURROUNDED
- ▁WEAPON
- ▁BLANK
- ▁DEFEND
- ▁EDITH
- ▁HORRIBL
- ▁BETRAY
- ▁FERKO
- ▁LABOUR
- ▁NEGRO
- ▁RESUMED
- ▁LEAF
- ▁MUSKET
- ▁INTENSE
- ▁MERCY
- ▁ADOPT
- ▁SCORE
- ▁DASH
- ▁LAWYER
- ▁SLOPE
- ▁CHUCK
- ▁ASSISTANCE
- ▁BROOK
- ▁BREAKING
- ▁ASSIST
- ▁GROAN
- ▁HELEN
- ▁BEHAV
- ▁MAIDEN
- ▁CRIS
- ▁SHOUTING
- ▁NAY
- ▁PIG
- ▁ACCORDINGLY
- ETTE
- ▁DESIR
- ▁RUB
- ▁GRU
- ▁PIT
- ▁HEAVI
- ▁OBTAINED
- ▁SPARE
- ▁BRANCH
- ▁COUNTER
- ▁APART
- ▁AMBITION
- ▁ASTONISHED
- ▁CORRESPOND
- ▁DRIVING
- ▁ENERGY
- ▁HISTORIAN
- ▁REVOLUTION
- ▁SWEEP
- ▁TREMBLING
- ▁CRAFT
- ▁FAMILIES
- ▁LITERATURE
- SBURG
- ▁FEMALE
- ▁TILNEY
- ▁GENEROUS
- ▁SUBMIT
- ▁INTELLECTUAL
- ▁ORCHARD
- ▁STORIES
- ▁DIANA
- ▁VEIN
- ▁TRIFL
- ▁TWIN
- ▁WORSHIP
- ▁MARBLE
- ▁GALLANT
- ▁SENSIBLE
- ▁NEAT
- ▁BROWNIE
- ▁JUNE
- ▁SHAW
- ▁WORST
- ▁USELESS
- ▁FISHING
- ▁CRYING
- ▁MAYBE
- ▁VARI
- ▁PRESERVE
- ▁VOL
- ▁EMPLOY
- ▁INTERRUPT
- ▁SLIGHTLY
- ▁ACCOMPLISHED
- NEY
- ▁STEAM
- ▁BALANC
- ▁LEANING
- ▁SIGHED
- ▁REFUSE
- ▁IMAGINED
- ▁DATE
- GROUND
- ▁ENTERTAIN
- ▁PERCEIVE
- ▁ABROAD
- ▁CHEESE
- ▁DESTRUCTION
- ▁ESSENTIAL
- ▁EXPEDITION
- ▁GRANDFATHER
- ▁INFINITE
- ▁LIBRARY
- ▁MULTITUDE
- ▁NEGLECT
- ▁SWALLOW
- ▁VILLEFORT
- ▁BELOVED
- ▁COMMITTEE
- ▁CONFIDENT
- ▁PURPLE
- ▁PURCHAS
- ▁SCRAP
- ▁SPOIL
- ▁LIKEWISE
- ▁EXTRA
- ▁STRAW
- ▁SALUT
- ▁SOURCE
- ▁HASTENED
- ▁RESENT
- ▁FLOCK
- ▁LOFT
- ▁FLO
- ▁CLO
- ▁CONVINCED
- ▁GOODNESS
- ▁HYPNOTIZ
- ▁SETTING
- ▁HAIL
- ▁PHI
- ▁GROVE
- ▁DISCOVERY
- ▁DAMP
- ▁WHISPER
- ▁LIFT
- ▁HOP
- ▁SUSPECTED
- ▁SCR
- OLI
- ▁FAC
- ▁BUSH
- ▁FOREVER
- ▁BARRICADE
- ▁CONSTITUTION
- ▁ENDEAVOR
- ▁ENTHUSIASM
- ▁EXECUTION
- ▁HYACINTH
- ▁PERCEVAL
- ▁PSYCHE
- ▁REPROACH
- ▁THIRTEEN
- ▁ABSORB
- ▁GRATITUDE
- ▁MERCER
- ▁REPUTATION
- ▁SCREAM
- ▁PUPIL
- ▁RETIRED
- ▁STEEP
- ▁SUMMIT
- ▁MISERABLE
- ▁STRICT
- ▁MINGLED
- ▁DEFEAT
- ▁REVEAL
- ▁LOVING
- ▁GOOSE
- ▁ECHO
- ▁AWAIT
- ▁MOOD
- ▁CRAWLEY
- ▁CELL
- ▁ENGAGEMENT
- ▁PRECED
- ▁SOMEONE
- ▁ARRANGEMENT
- ▁PICKET
- ▁GASP
- ▁HUMOR
- ▁INVITATION
- ▁JOB
- WITHSTAND
- ▁LAMENT
- ▁CLASSES
- ▁HUNGER
- ▁DISPOSED
- ▁STEAMER
- ▁FEARFUL
- ▁GER
- ▁FINAL
- ▁FLAG
- ▁JULY
- ▁DIG
- WORK
- ▁OPPOS
- ▁ANXIETY
- ▁AUDIENCE
- ▁BACHELOR
- ▁COLUMN
- ▁HANDKERCHIEF
- ▁IMPATIENT
- ▁JUDGMENT
- ▁KNIFE
- ▁SOVEREIGN
- ▁STRIKING
- ▁THOMPSON
- ▁EMPIRE
- ▁FULFIL
- ▁CONSULT
- ▁JENNY
- ▁THENARDIER
- ▁POYSER
- ▁FOURTEEN
- ▁JAPANESE
- ▁INDULG
- ▁MARTIAN
- ▁COUNTRIES
- ▁FETCH
- ▁CRITIC
- ▁ROBBER
- ▁CROOK
- ▁DEPARTURE
- ▁MABEL
- ▁PREACH
- ESCENT
- ▁WHIP
- ▁NAIL
- ▁DELIGHTFUL
- ▁DISCUSSION
- ▁SENTENCE
- ▁LANE
- ▁ENGINEER
- ▁ARRANGED
- MMY
- ▁LEST
- ▁RENT
- MMED
- ▁LIST
- ▁ROBE
- ▁MISSION
- ▁GRACEFUL
- ▁LIGHTN
- STONE
- COURT
- ▁CONCEPTION
- ▁CONTRACT
- ▁DROWN
- ▁EXPERIMENT
- ▁HITHERTO
- ▁PLAGUE
- ▁PORTHOS
- ▁SHRIEK
- ▁DETECT
- ▁ACCENT
- ▁ERECT
- ▁SAZEN
- ▁PROFIT
- ▁VIVID
- ▁SQUIRE
- ▁OPERATION
- ▁SMELL
- ▁SIMON
- ▁EXTENT
- ▁KEEN
- ▁EMERG
- ▁REVIV
- ▁REGIMENT
- ▁DISAPPOINTMENT
- ▁STOLE
- ▁DIVINE
- ▁GUILTY
- ▁COWARD
- ▁EXPECTATION
- ▁SIGNOR
- ▁MODE
- ▁CENTRE
- ▁FIL
- HOW
- ▁WEARI
- ▁TOTAL
- ▁VICTOR
- ▁GOVERN
- ▁RAISE
- ▁ABANDON
- ▁ABSURD
- ▁ASPECT
- ▁CRIMINAL
- ▁DEFINITE
- ▁DELIBERAT
- ▁FEATHER
- ▁FLORINA
- ▁MIDNIGHT
- ▁RICHMOND
- ▁SATISFY
- ▁SINGULAR
- ▁STEADILY
- ▁SUPREME
- ▁TIMBER
- ▁PSYCHOLOG
- ▁GESTURE
- ▁VALUABLE
- ▁INTERVAL
- ▁CONFUSION
- ▁FLUTTER
- ▁SACRED
- ▁DISEASE
- ▁UNDERTAKE
- ▁PENETRAT
- ▁MARVEL
- ▁NORTHERN
- ▁GRIEV
- ▁GENIUS
- ▁SADDLE
- ▁NOVEL
- ▁MISERY
- ▁CONVICTION
- ▁SINK
- ▁WAGON
- ▁ARISE
- ▁COMMENT
- ▁BARN
- UPON
- ▁FENCE
- ▁ASSOCIATION
- ▁BONES
- ▁IDLE
- ▁DOUBTFUL
- ▁PREPARATION
- IZZ
- ▁RAIS
- ▁BITTERLY
- ▁JOE
- ▁RELI
- ADI
- ▁METAL
- ▁EXACT
- ▁GLOOM
- FIELD
- ▁DANGLARS
- ▁DISGRACE
- ▁EXAMINATION
- ▁FASCINAT
- ▁GLITTER
- ▁INCREASING
- ▁MESSENGER
- ▁PATRIOT
- ▁PLATFORM
- ▁PROVISION
- ▁QUALITIES
- ▁SELECT
- ▁STEADY
- ▁POVERTY
- ▁POWDER
- ▁PROPHET
- ▁HOLLAND
- ▁TRUNK
- ▁VARIETY
- ▁PLANCHET
- ▁CONQUER
- ▁CONCEIVE
- ▁COMBAT
- ▁STOOP
- ▁SHIRT
- ▁GENERATION
- ▁COMMITTED
- ▁INSULT
- ▁CONFUSED
- ▁RADIAN
- ▁DEBT
- ▁IMITAT
- ▁DART
- ▁CAROLINE
- ▁SWAM
- ▁WREN
- ▁CHILDHOOD
- ▁BRAND
- ▁JOKE
- ▁FRIENDSHIP
- ▁DIRT
- ▁JOLL
- ▁BUSHES
- ▁MINK
- ▁ROUT
- ▁EQUALITY
- ▁HESITATED
- ▁BARK
- ▁ANTI
- ▁STATEMENT
- PHER
- ▁SUNK
- ▁DAT
- ▁BACKWARD
- ▁SUSPECT
- ▁OBJECTION
- ▁RAP
- ▁CHIN
- ▁MATE
- ▁REDUC
- ▁GREGG
- ▁ACCOMPANY
- ▁ANYWHERE
- ▁BENEFIT
- ▁CLERK
- ▁EXPENSE
- ▁FETNAH
- ▁INTERPRET
- ▁LUKASHKA
- ▁NUMEROUS
- ▁SURGEON
- ▁PUZZL
- ▁RESCUE
- ▁GRATEFUL
- ▁APPROV
- ▁RIVAL
- ▁NIECE
- ▁FLOOD
- ▁VANISHED
- ▁ERROR
- ▁BLAZ
- ▁TUMBL
- ▁WENDY
- ▁PERSIST
- ▁CONSOL
- ▁SOAP
- ▁HUMOUR
- ▁FITTED
- ▁HOUSEKEEPER
- ▁ENABL
- ▁OCCASIONALLY
- ▁HATRED
- ▁SWELL
- ▁WORRY
- ▁RUST
- ▁PURSUIT
- ▁INTIMATE
- ▁SEAL
- ▁COLLECTION
- ▁TREMBLED
- ▁DENY
- ▁HUMANITY
- ▁FATAL
- ▁COCK
- ▁DRIVER
- ▁HOPELESS
- ▁MISTAKEN
- ▁LUC
- ▁ACCOMPLISH
- ▁COAL
- ▁ACCORD
- ▁PURSE
- ▁SEPARATE
- ▁ARRIVE
- ▁SMOK
- ▁MADAM
- ▁ASSOCIAT
- ▁INSTRUCT
- ▁CELEBR
- ▁CHANNEL
- ▁CIVILIZATION
- ▁DOCTRINE
- ▁ENDEAVOUR
- ▁GLACIER
- ▁INTELLIGENT
- ▁INVOLVE
- ▁LEATHER
- ▁MUTTERED
- ▁OLENIN
- ▁PENCROFT
- ▁PERPLEX
- ▁SPECTATOR
- ▁UNIVERSITY
- ▁ATTAIN
- ▁INEVITABL
- ▁YONDER
- ▁ENCHANT
- ▁REPAIR
- ▁CURRENT
- ▁ASCEND
- ▁CREEK
- ▁SPARKL
- ▁RUE
- ▁BEAVER
- ▁INFANT
- ▁CONTINUALLY
- ▁CLASP
- ▁IRISH
- ▁ROLLIN
- ▁PUNISHMENT
- ▁LUNCH
- ▁AGONY
- ▁RUDE
- ▁DRAGG
- ▁INQUIRI
- ▁SEX
- ▁TERRIFI
- ▁ROBIN
- ▁PROFESSIONAL
- ▁SPUR
- ▁GRAIN
- ▁VINE
- ▁PENN
- ▁ROC
- ▁CHASE
- ▁INFORM
- ▁WRITER
- ▁AVO
- ▁TAP
- ▁CREAT
- ▁WHIL
- ▁BARR
- ▁ASSURE
- ▁CIRCUMSTANCE
- ▁OIL
- ▁ROUSE
- ▁COLUMB
- ▁CUNNING
- ▁DOMESTIC
- ▁GLORIOUS
- ▁INDIGNATION
- ▁PRECISELY
- ▁PRUDENCE
- ▁RAILROAD
- ▁SATURDAY
- ▁UTMOST
- ▁VIOLENCE
- ▁WHIRL
- ▁CALCULAT
- ▁OVERWHELM
- ▁PERPETUAL
- ▁QUARLES
- ▁SLENDER
- ▁TELEGRAPH
- ▁ALOUD
- ▁OPPRESS
- ▁CROPPER
- ▁CANADIAN
- ▁HERBERT
- ▁TIMID
- ▁SUPPLY
- ▁STROLL
- ▁CREEP
- ▁OATH
- ▁DUSK
- ▁EXCESS
- ▁HUMBLE
- ▁FURIOUS
- ▁RIDGE
- ▁BULLET
- ▁PONY
- ▁STATU
- ▁ENJOYMENT
- ▁CONWAY
- ▁DIFFICULTIES
- ▁PATCH
- ▁JOYCE
- ▁CLOCK
- ▁RESTORED
- ▁ARGU
- ▁WIG
- ▁CHATT
- ▁PLAC
- ▁REMOVE
- ▁TORN
- ▁DISAPPEAR
- TIME
- WELL
- ▁RECOGNIZE
- ▁FISHE
- ▁DECLARE
- ISTIC
- ▁AUTHOR
- ▁WHISK
- ▁COFFEE
- ▁COMPREHEND
- ▁DISGUISE
- ▁ELZEVIR
- ▁ENTERPRISE
- ▁HOLIDAY
- ▁HORIZON
- ▁IGNORANT
- ▁INTERVIEW
- ▁OLIVER
- ▁RONICKY
- ▁CAPACITY
- ▁DISPOSITION
- ▁EXTERNAL
- ▁OPPOSITION
- ▁REPUBLIC
- ▁WHEAT
- ▁CORPSE
- ▁DARLING
- ▁THRILL
- ▁INHABITANTS
- ▁ORNAMENT
- ▁SHIFT
- ▁RECOGNISE
- ▁SHIVER
- ▁BOAST
- ▁HINT
- ▁BOSTON
- ▁MULTI
- IFYING
- ▁STEAL
- ▁INSTRUCTIONS
- ▁ELECTRIC
- ▁SWING
- ▁SOOTH
- ▁SCALE
- ▁MORLAND
- ▁DISLIKE
- ▁FLATTER
- ▁COACH
- ▁LEIF
- ▁STAMP
- ▁ANYHOW
- ▁MOTIONLESS
- ▁ANDREA
- ▁LOSING
- ▁PAUL
- ▁CAROL
- ▁ADVANC
- ▁IMAGIN
- ▁CENTER
- ▁JAR
- ▁SUCCEED
- ▁DISMISS
- CTOR
- ▁RECEIV
- ▁DRAG
- ▁INTENT
- ▁BARBAR
- ▁PUNISH
- ▁ABRUPTLY
- ▁BERNARD
- ▁DECISION
- ▁INDEPENDENT
- ▁PROVINCE
- ▁SLEEVE
- ▁TREMENDOUS
- ▁UNPLEASANT
- ▁LEISURE
- ▁THRONG
- ▁THUMB
- ▁BANNER
- ▁CONTRADICT
- ▁RESTRAIN
- ▁DIVIDED
- ▁WRAPPED
- ▁HAUNT
- ▁SNEER
- CHESTER
- ▁JULIA
- ▁MILD
- ▁CONTACT
- ▁MEANTIME
- ▁NEEDLE
- ▁BLOT
- ▁BARREL
- ▁ISABELLA
- ▁THEATRE
- ▁ESTABLISHMENT
- ▁MARKET
- ▁CHINA
- ▁FORBID
- ▁PERISH
- ▁DOORWAY
- ▁CARLING
- ▁PERIL
- ▁PRIZE
- ▁HATCH
- ▁CURL
- ▁REFER
- ▁DEVOT
- EMBER
- MONT
- ▁CANOE
- ▁PROFESSION
- ▁CONVICT
- ▁CRAWL
- ▁ACTIVITY
- ▁BEWILDER
- ▁BREEZE
- ▁CONTEMPLAT
- ▁DISGUST
- ▁FATIGUE
- ▁MERRICK
- ▁PRAIRIE
- ▁REFORM
- ▁SPECTACLE
- ▁STUDENT
- ▁TUMULT
- ▁UNIFORM
- ▁VIGOROUS
- ▁CONDEMN
- ▁GENUINE
- ▁THOMAS
- ▁ARROW
- ▁PILLOW
- ▁FEEBLE
- ▁RALPH
- ▁SCHEME
- ▁COLLAR
- ▁JUSTINIAN
- ▁NERVE
- ▁OYSTER
- ▁BENNET
- ▁DUTIES
- ▁BINGLEY
- ▁CHRISTMAS
- ▁CONVEY
- ▁DESPIS
- ▁RATTL
- ▁GARMENTS
- ▁GOWN
- ▁BERYL
- ▁BARRIER
- ▁CHARACTERISTIC
- ▁MEDITAT
- ▁DISCOURSE
- ▁STAFF
- ▁KARA
- ▁MONTE
- ▁READILY
- ▁VENTUR
- ▁HENCE
- ▁ROPE
- ▁CRIES
- ▁ANGLE
- ▁RESPECTABLE
- ▁MOAN
- ▁OUTLINE
- BORN
- ▁FIX
- ▁INTEND
- LIA
- ▁CHILL
- ▁CREP
- ▁CHOSE
- ▁SPECULAT
- ▁ATTRIBUT
- ▁BUFFALO
- ▁ENTREAT
- ▁ENVELOP
- ▁FREDERICK
- ▁IMPATIENCE
- ▁INDIFFERENCE
- ▁INDUSTRY
- ▁INSTITUTION
- ▁LYNDE
- ▁RETAIN
- ▁TROUTINA
- ▁UNCOMFORTABL
- ▁VENGEANCE
- ▁JENKS
- ▁CONGRESS
- ▁SMART
- ▁THITHER
- ▁DISAGREE
- ▁IMPROVEMENT
- ▁PISTOL
- ▁GOSSIP
- ▁ETERNAL
- ▁BELIEF
- ▁SLEDGE
- ▁AROUSED
- ▁ORANGE
- ▁FASTENED
- ▁MONKEY
- ▁WITHDREW
- ▁OFFEND
- ▁PIERC
- ▁MOONLIGHT
- ▁OARS
- ▁GROOM
- ▁FIDDLER
- ▁BARBARA
- SHIRE
- ▁ATTENDANT
- ▁DIVERS
- ▁DUCK
- ▁PROPOSAL
- ▁GROWTH
- ▁CURATE
- ▁STEWAR
- ▁MOCK
- ▁SUCCESSION
- ▁CREATION
- ▁PARTIAL
- ▁SWU
- ▁FROST
- ▁EIGHTH
- ▁AWE
- ▁PERCH
- ▁LACE
- SPOON
- ▁ARRANGE
- SERIES
- ▁FOG
- ▁SCU
- ▁ABRAHAM
- ▁ADMIRAL
- ▁BARBICANE
- ▁CAMPAIGN
- ▁CONSEQUENTLY
- ▁CULTURE
- ▁GRAMMONT
- ▁GWYNPLAINE
- ▁HAPPILY
- ▁HOOPDRIVER
- ▁INDEPENDENCE
- ▁LEOPOLD
- ▁MISCHIEF
- ▁MONTGOMERY
- ▁NECESSARILY
- ▁PSYCHIC
- ▁RABBIT
- ▁REFUGE
- ▁RESPONSIBILIT
- ▁SENATOR
- ▁UNCERTAIN
- ▁MENSTRUA
- ▁FANNY
- ▁SUBSTANCE
- ▁APRIL
- ▁ELBOW
- ▁QUALITY
- ▁BORDER
- ▁BRUTAL
- ▁CARPET
- ▁SOLITAR
- ▁FROWN
- ▁SCENT
- ▁ANNOY
- ▁NAKED
- ▁BOSOM
- ▁CONSUM
- ▁TIGER
- ▁ITALIAN
- ▁PARSON
- ▁DECLIN
- ▁NEIGHBORHOOD
- ▁GREGGORY
- ▁EXCEED
- ▁SILLY
- ▁ICELAND
- ▁HIDEOUS
- ▁STRU
- ▁ALTERNAT
- ▁CABINET
- ▁ABILITY
- ▁BEECH
- ▁SECRETARY
- ▁CONTEST
- ▁MONK
- ▁PADD
- ▁EVA
- ▁CREST
- ▁FINISH
- ▁APPARENT
- ▁MIX
- ▁SLIP
- ▁LUXURI
- ▁AUTUMN
- ▁CIRCULAR
- ▁COMPOSITION
- ▁DISPLEAS
- ▁EXCELLENC
- ▁FURNITURE
- ▁GRADUATE
- ▁INDIFFERENT
- ▁JOSEPH
- ▁OCCUPATION
- ▁POSSIBILITY
- ▁RENEWED
- ▁RESPONDED
- ▁PREVAIL
- ▁HOARSE
- ▁PRACTIS
- ▁FAREWELL
- ▁JULIET
- ▁OVERHEAD
- ▁THREAD
- ▁APPLICATION
- ▁SOLITUDE
- ▁ADAPT
- ▁FALK
- ▁LARK
- ▁COARSE
- ▁MANKIND
- ▁KICK
- ▁BATTER
- ▁SOLICIT
- ▁RESIGN
- ▁MOTOR
- ▁STEEL
- ▁CONTRIV
- ▁AUTHORITIES
- ▁HARSH
- ▁FAVORITE
- ▁TALENT
- ▁FLEECE
- ▁AGITATION
- ▁ABBE
- ▁STUCK
- ▁HEDGE
- ▁BIBLE
- ▁RECOLLECTION
- ▁PARTNER
- ▁DAMON
- ▁SHINE
- ▁HOOK
- ▁CONFESSION
- ▁ASSENT
- ▁ELDE
- ▁BIGGE
- ▁PEACEFUL
- SCRIBED
- ▁WEIGH
- CARLET
- ▁DECIDE
- ▁RECOLLECT
- ▁BOHEMIA
- ▁CALIFORNIA
- ▁CONSTRUCT
- ▁DEMONSTRAT
- ▁DISTRIBUT
- ▁FRIGHTFUL
- ▁GNOME
- ▁IGNORANCE
- ▁JANUARY
- ▁JULIUS
- ▁MEMORIES
- ▁OCCUPY
- ▁PHRASE
- ▁WHIRLWIND
- ▁WILMINGTON
- ▁CARLINI
- ▁CHAUVELIN
- ▁ESTEEM
- ▁GENZABURO
- ▁GLOBE
- ▁LECOQ
- ▁MARGARET
- ▁MONARCH
- ▁NAPOLEON
- ▁SCORN
- ▁STAGGER
- ▁SUSTAIN
- ▁TRADITION
- ▁ADJUST
- ▁FROZEN
- ▁IMPRISON
- ▁LANTERN
- ▁MICHEL
- ▁STOMACH
- ▁TORRENT
- ▁WITHDRAW
- ▁FRANZ
- ▁POISON
- ▁SURVEY
- ▁BRITISH
- ▁ELEVAT
- ▁AWOKE
- ▁ESTHER
- ▁INHERIT
- ▁TRAVERS
- ▁STOPPING
- ▁IRELAND
- ▁COMPARATIVE
- ▁SOBB
- ▁FAVOURITE
- ▁CANVAS
- ▁CLOAK
- ▁GLAR
- ▁ASSISTANT
- ▁DAMAGE
- ▁PEAK
- ▁DISTINCTION
- FARE
- ▁DOLLAR
- ▁BEGGAR
- LUSIVE
- ▁MODEL
- ▁SECUR
- ▁DISPOS
- ▁SLID
- ▁PEA
- ▁SPEEDI
- HOLD
- ▁SNAP
- ▁CIGAR
- ▁AFFLICT
- ▁AMAZEMENT
- ▁LAUNCELOT
- ▁LEAGUE
- ▁MARIPOSA
- ▁POPULATION
- ▁UNEASY
- ▁BLOSSOM
- ▁CATERPILLAR
- ▁INCLINATION
- ▁SUSPEND
- ▁SYNDIC
- ▁TAYLOR
- ▁WILSON
- ▁CONTRAST
- ▁PORTRAIT
- ▁CORONER
- ▁GREEK
- ▁BUNDLE
- ▁BLEW
- ▁THORPE
- ▁ORPHAN
- ▁MUSCLE
- ▁DEAF
- ▁SURVIV
- ▁EXCEEDINGLY
- ▁TENDENC
- ▁ISRAEL
- ▁QUANTIT
- ▁PENSION
- ▁DRIED
- TEXT
- ▁REFERENCE
- ▁REPOSE
- ▁FOLLY
- ▁REPLACE
- ▁TERR
- ▁ANKLE
- ▁SUNLIGHT
- ▁SECURITY
- ▁SHOV
- ▁RAW
- CULAR
- ▁JACKET
- ▁TUNE
- ▁HOBB
- ▁MARTIN
- DUCED
- ▁FIST
- ▁BEGG
- ▁CHOK
- ▁INQUIRE
- ▁INTELLECT
- ▁AMUSEMENT
- ▁APPROPRIATE
- ▁CONGRATULAT
- ▁CONVENTION
- ▁DISCOURAG
- ▁EXQUISITE
- ▁FOUNTAIN
- ▁JUNIOR
- ▁NONSENSE
- ▁OBSTACLE
- ▁SPECIMEN
- ▁SWEAR
- ▁TRANQUIL
- ▁VEHICLE
- ▁WISDOM
- ▁ASCERTAIN
- ▁CAUTIOUS
- ▁CENTURIES
- ▁CORRUPT
- ▁EXPLOR
- ▁TURKEY
- ▁BARGAIN
- ▁CONFOUND
- ▁FUNCTION
- ▁GRACIOUS
- ▁MONICA
- ▁ILLUSTRAT
- ▁CRUMB
- ▁REMEDY
- ▁REMOTE
- ▁REVENGE
- ▁BABYLON
- ▁CAUTION
- ▁INTERIOR
- ▁CRISTEL
- ▁BRAZ
- ▁THIRST
- ▁PROBABLE
- ▁HARMONY
- ▁CHARITY
- ▁DECAY
- ▁COLONI
- ▁AVAIL
- ▁REPULS
- ▁ABSENT
- ▁PULSE
- ▁PRESUM
- ▁CRANE
- ▁NEIGHBOURHOOD
- ▁SUNSET
- ▁CANNON
- ▁GRAPE
- ▁SOFA
- ▁DRANK
- MINOUS
- ▁DECLARATION
- ▁CLOSING
- ▁MEEK
- ▁STARV
- ▁BUNCH
- ▁PERFORMANCE
- ▁ENTERTAINMENT
- ▁STRIV
- ▁EMILY
- ▁VALET
- MPOSED
- ▁INTIMA
- ▁POLISH
- ▁HIRE
- POST
- ▁TREMBLE
- ▁CEASE
- ▁VIRGIN
- ▁RUSSIA
- COURSE
- ▁EDUCAT
- BOUND
- ▁INHABIT
- ▁SUPERINTEND
- ▁BISCUIT
- ▁CHICAGO
- ▁CHOKICHI
- ▁CONFLICT
- ▁ENCLOS
- ▁EXCLUSION
- ▁EXECUTIVE
- ▁GRANDMOTHER
- ▁HEADQUARTERS
- ▁INFERIOR
- ▁INVISIBLE
- ▁MUTUAL
- ▁OPPONENT
- ▁SENSITIVE
- ▁STUDIED
- ▁TEMPORARY
- ▁UNWILLING
- ▁PERMANENT
- ▁BEDROOM
- ▁NOVEMBER
- ▁COMPLICAT
- ▁DEVOUR
- ▁SCRAMBL
- ▁SECTION
- ▁PROPOSITION
- ▁DEPRIV
- ▁RYNCH
- ▁PLEAD
- ▁TORTURE
- ▁SCOUT
- ▁PILOT
- ▁CHERISH
- ▁SPEAR
- ▁SUGAR
- ▁JASPER
- ▁STRAY
- ▁RIFLE
- ▁NORMAL
- ▁JERK
- ▁HONEY
- ▁AWAKENED
- ▁QUIVER
- ▁PYE
- ▁APPLY
- LICK
- JA
- ▁ANNOUNC
- FORE
- ▁ENGINE
- ▁HESITATE
- ▁PROVIDE
- ▁REALIZE
- ▁SEIZE
- ▁RESTORE
- MOUTH
- FOOT
- ▁DIFFER
- ▁ULTIMATE
- ▁ABUNDANCE
- ▁APPRECIATE
- ▁APPREHENSION
- ▁AVENUE
- ▁AWKWARD
- ▁CETERA
- ▁CHIMNEY
- ▁CLUTCH
- ▁CONVENIENT
- ▁CORRIDOR
- ▁DISTRACT
- ▁ELEGANT
- ▁ELSEWHERE
- ▁ENTHUSIASTIC
- ▁EXECUTE
- ▁EXTREMIT
- ▁JERUSALEM
- ▁MIRACLE
- ▁MONSTROUS
- ▁OBEDIENCE
- ▁OBSCURE
- ▁PHENOMENA
- ▁RESIDENCE
- ▁RESOURCE
- ▁REVOLT
- ▁SCIENTIFIC
- ▁SHIELD
- ▁SIMPSON
- ▁UNIVERSE
- VOLUNTARY
- ▁ATTENTIVE
- ▁BRENDA
- ▁DEPOSIT
- ▁MAXIM
- ▁REJECT
- ▁STIRRED
- ▁DISORDER
- ▁SERENE
- ▁TOBACCO
- ▁MILTON
- ▁BALLOON
- ▁STEPHEN
- ▁STRAIT
- ▁CHINESE
- ▁COURTEOUS
- ▁RELEASE
- ▁RECESS
- ▁COTTON
- ▁STUMP
- ▁TANK
- ▁PROMOTE
- ▁DERIVE
- ▁LOYAL
- ▁GRANIT
- ▁DISMAL
- ▁CATTLE
- ▁DOONE
- ▁CUPID
- DIGNIFIED
- ▁RIPE
- ▁EXILE
- ▁ANTIQU
- UMINAT
- ▁SUPPOS
- ▁WRETCH
- ▁IDENTI
- ▁EASI
- ▁SERV
- ▁QUEST
- TOWN
- ▁ACHIEVEMENT
- ▁APPETITE
- ▁BUCCANEER
- ▁COMMENCED
- ▁DELAWARE
- ▁DISCERN
- ▁IMMORTAL
- ▁INDIGNANT
- ▁JOSIANA
- ▁MECHANICAL
- ▁MUSKRAT
- ▁REVIEW
- ▁ROBARTS
- ▁SIGNIFICANT
- ▁SUBSEQUENT
- ▁YOURSELVES
- ▁ANGRILY
- ▁BORROW
- ▁SUBLIME
- ▁AFRICA
- ▁CHICKEN
- ▁DEGRAD
- ▁GEORGI
- ▁HUMILIAT
- ▁LODGING
- ▁REDCOAT
- ▁VIOLET
- ▁HOPKINS
- ▁RAWDON
- ▁PRICK
- ▁WHALE
- ▁FUNERAL
- ▁GUINEA
- ▁DISMAY
- ▁PORCH
- ▁HARVEST
- ▁PARCEL
- ▁SUBDU
- ▁SYRIA
- ▁PANIC
- ▁BOUGHS
- ▁CIGARETTE
- ▁CHRON
- ▁INQUIRY
- ▁CRYSTAL
- ▁SPELL
- ▁PLUCK
- ▁PATTERN
- ▁DARING
- ▁CRITICISM
- ▁DAINT
- ▁DISTURBANCE
- ▁BUTCHER
- ▁LITERA
- ▁ABUSE
- IXTURE
- ▁ANIMAT
- ▁WRIT
- ▁BELIEV
- ▁INDUCE
- COMING
- ▁DRAMA
- ▁AGITAT
- SHAW
- ▁IMPERFECT
- ▁MANUFACTURE
- ▁AFFIRM
- ▁ANGUISH
- ▁ARTIFICIAL
- ▁BIBBS
- ▁CHARLOTTE
- ▁CIRCUS
- ▁CONNISTON
- ▁CONSTITUTE
- ▁DAZZL
- ▁DEFECT
- ▁DISCHARG
- ▁ESCORT
- ▁EXAGGERAT
- ▁GWENDOLEN
- ▁IRRESISTIBL
- ▁PHILOSOPHY
- ▁PHOTOGRAPH
- ▁PILGRIM
- ▁PLEASING
- ▁QUIXOTE
- ▁RESPONSE
- ▁SCRATCH
- ▁SERGEANT
- ▁SHERIFF
- ▁SHUDDER
- ▁STRUCTURE
- ▁SUFFRAGE
- ▁SURRENDER
- ▁SWORE
- ▁VILLAIN
- ▁HESITATING
- ▁FLORENCE
- ▁IRRITAT
- ▁RIGID
- ▁SINISTER
- ▁STUDIO
- ▁RAFT
- ▁CHAMPION
- ▁PAVEMENT
- ▁WOLF
- ▁DEVICE
- ▁WRECK
- ▁HESITATION
- ▁LAZY
- ▁ADJO
- ▁DECENT
- ▁INTERVEN
- ▁WOOL
- ▁ILLUSION
- ▁HAWK
- ▁IMPART
- ▁LUNGS
- ▁WINNING
- ▁VITAL
- ▁CONSPI
- ▁SUBTLE
- ▁CONSTANC
- ▁HURL
- ▁AMIABL
- ▁FOLK
- GGY
- ▁NECESSIT
- ▁PROFESS
- WASH
- ▁ADMIRING
- ▁AMBITIOUS
- ▁ANTHONY
- ▁CEREMONY
- ▁CONTRIBUTE
- ▁CRAGGS
- ▁DETAIN
- ▁DISCLOS
- ▁DWELT
- ▁EGYPT
- ▁FELIX
- ▁JOURNAL
- ▁KWAIRYO
- ▁LIBERAL
- ▁LUMBER
- ▁OCTOBER
- ▁ORGANIZATION
- ▁POPULACE
- ▁PRECAUTION
- ▁PREJUDICE
- ▁PROCLAIM
- ▁PROPRIETOR
- ▁RESPONSIBLE
- ▁RHYTHM
- ▁RIDICULOUS
- ▁SCHOLAR
- ▁SQUEEZ
- ▁SUBSTITUTE
- ▁SURPASS
- ▁THRESHOLD
- ▁WHARTON
- ▁FLICKER
- ▁AMAZED
- ▁BRONZE
- ▁COSSACK
- ▁SPILETT
- ▁CASUAL
- ▁DARCY
- ▁PARLOUR
- ▁SEXUAL
- ▁INSECT
- ▁NATHAN
- ▁EMINENT
- ▁PENCIL
- ▁PETITION
- ▁ROTTEN
- ▁VIGIL
- ▁CAESAR
- ▁EAGLE
- ▁TREAD
- ▁REACTION
- ▁TACIT
- ▁PARLOR
- ▁SPAIN
- ▁WILDERNESS
- ▁DICTAT
- ▁GRATIFY
- ▁STOVE
- ▁SKIRT
- ▁UTILI
- ▁CONCERT
- ▁GORGE
- ▁DECORAT
- ▁LATIN
- ▁ANCHOR
- ▁KNOT
- ▁MONDAY
- ▁GABLES
- ▁TOLERABL
- ▁ROGER
- BERRIES
- ▁INVAD
- IMMER
- OMETER
- ▁PRODUC
- OBIL
- ▁PERMISSI
- FICIENCY
- ▁WANDER
- RREL
- PIECE
- HORN
- ▁COMMIT
- ▁ACCUMULAT
- ▁JAPAN
- ▁ABUNDANT
- ▁ACADEMY
- ▁ALBERT
- ▁BANQUET
- ▁DELICIOUS
- ▁DOCUMENT
- ▁EXCLAMATION
- ▁FEBRUARY
- ▁GROTESQUE
- ▁HEATHERSTONE
- ▁HUMPHREY
- ▁HURSTWOOD
- ▁MOHAMMED
- ▁MOSCOW
- ▁NICHOLAS
- ▁OBSTINATE
- ▁PHANTOM
- ▁PHILOSOPHER
- ▁RECEPTION
- ▁SPANIARD
- ▁SWOLLEN
- ▁TELEPHONE
- ▁TRIBUTE
- ▁TUNNEL
- ▁UNREASONABL
- ▁WIGWAM
- ▁BUTTERFLY
- ▁COLLINS
- ▁DISPATCH
- ▁EDITOR
- ▁CONTINENT
- ▁DIMINISH
- ▁HORRID
- ▁KEATS
- ▁PROVIDENCE
- ▁BEHALF
- ▁CHARLEY
- ▁DRAKE
- ▁LAUNCH
- ▁SALOON
- ▁GIGANT
- ▁DISPUTE
- ▁HYSTERI
- ▁DEFENCE
- ▁SCREEN
- ▁VAULT
- ▁NINTH
- ▁HARBOR
- ▁FLANK
- ▁SPECK
- ▁UPRIGHT
- ▁KEMP
- ▁CANADA
- ▁STALK
- ▁OWL
- ▁BRUTE
- ▁FERRIS
- ▁DECREE
- ▁HABITUAL
- ▁BRISK
- ▁INSPIRE
- ▁HUSH
- ▁CROUCH
- ▁FRIDAY
- ▁MOUNTAINEER
- ▁HISTORIC
- ▁BATES
- ▁RUSK
- ▁SEMI
- DICTION
- ▁BUSI
- ▁REMOV
- MMI
- ▁SUFFIC
- ▁FLEE
- ▁LOUIS
- NLEA
- ▁IMPORT
- OLOGY
- ▁CLERGY
- ▁ADVERTISEMENT
- ▁BENEVOLEN
- ▁BORODINO
- ▁CATHOLIC
- ▁COMMERCIAL
- ▁CONJECTURE
- ▁CURTAIN
- ▁CUTHBERT
- ▁DEMOCRACY
- ▁GUARANTEE
- ▁HYPNOSIS
- ▁INDEFINITE
- ▁INVESTIGATION
- ▁IRREGULAR
- ▁KOYO
- ▁MERRIWIG
- ▁MIRANDA
- ▁NICHOLL
- ▁ONLOOKER
- ▁PERSECUT
- ▁RECOGNITION
- ▁REJOICE
- ▁REMEMBRANCE
- ▁REVELATION
- ▁SCOLD
- ▁SENIOR
- ▁SQUIRREL
- ▁SYMPATHETIC
- ▁TEMPEST
- ▁TREACHER
- ▁UNDERNEATH
- ▁UNEASINESS
- ▁UNNECESSARY
- ▁UPSTAIRS
- ▁VEXATION
- ▁ACCESS
- ▁CHEAP
- ▁ESTIMATE
- ▁HAZARD
- ▁HORSEBACK
- ▁PLUNDER
- ▁RASCAL
- ▁ROSTOV
- ▁ACCUR
- ▁GRAVITY
- ▁SITUATED
- ▁INVARIABL
- ▁PLENTIFUL
- ▁SPENCER
- ▁WALLACE
- ▁POLICY
- ▁WARRANT
- ▁ENVY
- ▁LAMB
- ▁EXTRACT
- ▁CORRAL
- ▁PANEL
- ▁LINK
- ▁LILIES
- ▁BECKON
- ▁SENOR
- ▁BORG
- ▁DEBATE
- ▁STEER
- COGNI
- COMB
- ▁SETTL
- ▁VENERA
- ▁FEATURE
- ▁TERRIBL
- CAPABLE
- OLOGICAL
- ▁INCESSANT
- ▁RESOLUTE
- SHAUGHNESSY
- ▁ABOLITION
- ▁ASSASSIN
- ▁BEHAVIOUR
- ▁BLUNT
- ▁COMMERCE
- ▁CONSTANTINOPLE
- ▁CRICKET
- ▁DISCIPLINE
- ▁DROUET
- ▁DWARF
- ▁INJUSTICE
- ▁LUXURY
- ▁MANUSCRIPT
- ▁MISUNDERSTAND
- ▁POLITICIAN
- ▁REDOUBT
- ▁SALVATION
- ▁SERMON
- ▁STRUGGLING
- ▁SURPRISING
- ▁TRIGGER
- ▁TUESDAY
- ▁TWILIGHT
- ▁UNDOUBTEDLY
- ▁VEGETABLE
- ▁VULGAR
- ▁WAISTCOAT
- ▁WRINKLE
- ▁ALEXANDER
- ▁CEILING
- ▁ECONOMIC
- ▁EVERLASTING
- ▁INFLICT
- ▁LEVISON
- ▁LOBSTER
- ▁OVERFLOW
- ▁SNATCH
- ▁TRAGEDY
- ▁DEASEY
- ▁ENLIGHTEN
- ▁FRIGATE
- ▁INSPECT
- ▁MARVELLOUS
- ▁ATLANTIC
- ▁LUFTON
- ▁BLADE
- ▁CRASH
- ▁SLAUGHTER
- ▁ANNUAL
- ▁CONFERENCE
- ▁TWIG
- ▁REASSUR
- ▁UNIQUE
- ▁WRATH
- ▁CRADLE
- ▁HULLO
- ▁LIQUID
- ▁MIRTH
- ▁EXPERT
- ▁HARVEY
- ▁RESTORATION
- ▁PRETTI
- ▁APOLOGY
- ▁SLAIN
- ▁BARBER
- ▁UPROAR
- ▁SCANT
- ▁BADGER
- ▁GROCER
- ▁ACRES
- ▁BRIDLE
- ▁SPECIFI
- ▁TANGLE
- ▁FERTIL
- ▁PATRON
- WIXT
- LAMOUR
- ▁DARN
- ▁POPE
- ▁PERCEIV
- ▁CONCLUDE
- ▁SIMPL
- ▁GUILT
- ▁CARRIE
- EFFICIENT
- SGIVING
- ▁APPOINTMENT
- ▁APPRECIATION
- ▁CARTRIDGE
- ▁CHALLENGE
- ▁CRAYFISH
- ▁CRIMSON
- ▁CUCUMETTO
- ▁ENERGETIC
- ▁EPOCH
- ▁EXAMINING
- ▁EXTENSIVE
- ▁EXTINGUISH
- ▁GLOODY
- ▁INSIGNIFICANT
- ▁LANDLORD
- ▁LANGUID
- ▁LEGISLATURE
- ▁MAJESTIC
- ▁PACIFIC
- ▁PASTRINI
- ▁PHRONSIE
- ▁RECONCIL
- ▁SIMULTANEOUS
- ▁SKELETON
- ▁SKETCH
- ▁TRANSFORM
- ▁UNJUST
- ▁VEXED
- ▁ASYLUM
- ▁CLUSTER
- ▁ERRAND
- ▁EXPEND
- ▁NEGATIVE
- ▁NORHALA
- ▁SCANDAL
- ▁STIMULAT
- ▁SWEAT
- ▁COMPOUND
- ▁DECEMBER
- ▁EXPAND
- ▁PROLONG
- ▁PURITAN
- ▁CONQUEST
- ▁MAGUA
- ▁SANCHO
- ▁TRENCH
- ▁ENTITLE
- ▁PEPPER
- ▁DISASTER
- ▁REGAIN
- ▁SHREWD
- ▁SULLEN
- ▁CLAVIER
- ▁COLOSS
- ▁SHILLING
- ▁ETHEL
- ▁MYSTERIES
- ▁BULK
- ▁GRANDEUR
- ▁AGNES
- ▁CONVERT
- ▁WRIST
- ▁GLID
- ▁TERRACE
- ▁SONYA
- ▁DANTES
- ▁MOULD
- ▁MAGNET
- ▁PLOT
- RANK
- ▁CAVIT
- ▁SUBSID
- ▁SLAP
- TURNED
- ▁THREAT
- BREAK
- ▁ANCESTORS
- ▁ANTICIPATED
- ▁APPLAUSE
- ▁ASSAULT
- ▁ATTORNEY
- ▁AUTOMATIC
- ▁CARAVAN
- ▁CATASTROPHE
- ▁CAVALCANTI
- ▁CROMWELL
- ▁ENVOY
- ▁EXHAUSTION
- ▁FIEND
- ▁GENEROSITY
- ▁GIMBLET
- ▁HARDQUANONNE
- ▁HOUARN
- ▁INJURY
- ▁MACKINSON
- ▁OGLETHORPE
- ▁PETTICOAT
- ▁RASPBERR
- ▁REHNHJELM
- ▁REJOICING
- ▁REMNANT
- ▁SCOTLAND
- ▁SHRINK
- ▁STANDPOINT
- ▁TESTIMONY
- ▁THEREAFTER
- ▁THIRTIETH
- ▁TWENTIETH
- ▁TYRANT
- ▁VENTNOR
- ▁VETERAN
- ▁WHITTAKER
- ▁ZVERKOV
- ▁ARCHITECTUR
- ▁BLUNDER
- ▁DENSHER
- ▁FORTNIGHT
- ▁JUDITH
- ▁MARIANNE
- ▁MEMORABLE
- ▁REFINED
- ▁REVOLV
- ▁UNDERTAKING
- ▁CLUMP
- ▁GRUMBLE
- ▁SYMPATHI
- ▁TICKET
- ▁TWITCH
- ▁EDITION
- ▁FALANDER
- ▁CARTHAGE
- ▁ORLEANS
- ▁POSSUM
- ▁SWITCH
- ▁CLUNG
- ▁CARDINAL
- ▁GNAW
- ▁LOCATED
- ▁HARROW
- ▁RASH
- ▁SIEGE
- ▁LOAF
- ▁BRUISE
- ▁REGULAT
- ▁RESORT
- ▁SARAH
- ▁LEVIN
- ▁NAVY
- ▁MOOSE
- ▁STOOL
- ▁CHANCELLOR
- ▁INGENIOUS
- ▁CHALK
- ▁PRETENCE
- ▁REPAY
- ▁ROAST
- ▁PLUTO
- ▁BAFFL
- ▁STUMBL
- ▁SPHERE
- ▁PLEDGE
- ▁SPRAWL
- ▁WRAP
- ▁FRINGE
- ▁DREAR
- ARRINGTON
- ▁FEDERA
- KEEPER
- ▁PHYSIC
- ▁ADVENT
- HUMAN
- OLOGIST
- ▁ALEXANDR
- ▁APPARITION
- ▁BARTHOLEMY
- ▁CITOYEN
- ▁CLIMATE
- ▁CONTEMPORAR
- ▁DESOLATE
- ▁DISCONTENT
- ▁ELEPHANT
- ▁FERNANDO
- ▁FERRALTI
- ▁FOLIAGE
- ▁FUGITIVE
- ▁GAMBLING
- ▁INVOLUNTARILY
- ▁LABYRINTH
- ▁LEGITIMATE
- ▁MILLIONAIRE
- ▁PERCEPTION
- ▁PROPRIETY
- ▁REBELLION
- ▁REFRAIN
- ▁RUGGLES
- ▁SCRIPTURE
- ▁SPLENDOR
- ▁SQUADRON
- ▁STRICKEN
- ▁SWARM
- ▁THEODORA
- ▁TOMORROW
- ▁VELVET
- ▁WOLVES
- ▁DISREGARD
- ▁GLIMMER
- ▁SHROUD
- ▁TWINKLING
- ▁UNEQUAL
- ▁CHANNING
- ▁CLUMS
- ▁ENIGMA
- ▁NAVIGAT
- ▁TARKAS
- ▁TEMPERATURE
- ▁DIVISION
- ▁GRATIFICATION
- ▁MONUMENT
- ▁SQUEAK
- ▁KAVIN
- ▁INTERPOSE
- ▁THORNTON
- ▁SOLUTION
- ▁STREAK
- ▁SHRILL
- ▁APRON
- ▁PITEOUS
- ▁HAUGHTY
- ▁RECKLESS
- ▁EMPTI
- ▁WADMAN
- ▁BONNET
- ▁MARTHA
- ▁DUMB
- ▁SHATTER
- ▁ACUTE
- ▁BRINK
- ▁CAPRICE
- ▁HURON
- ▁INFERN
- ▁FOWL
- ▁ENRAGE
- ▁ADORN
- ▁CRUIS
- ▁PROBABILIT
- ▁EXPIR
- ▁IMPETU
- ▁OVERHEAR
- BURTON
- ▁TRANSLAT
- ▁ENGAGE
- ▁CONVINCE
- ▁ABNORMAL
- ▁GESTICULAT
- ▁ABOMINABL
- ▁ADVERSARY
- ▁ADVERTISER
- ▁ADVERTISING
- ▁ANNIHILAT
- ▁ARTILLERY
- ▁CATHEDRAL
- ▁COMPETITOR
- ▁COULSON
- ▁CREVICE
- ▁CUSHION
- ▁DEBRAY
- ▁DEJECT
- ▁DIETRICH
- ▁DISADVANTAGE
- ▁ELLISON
- ▁EMPHASIS
- ▁EXCURSION
- ▁FANTASTIC
- ▁HYPOTHES
- ▁INCONVENIENCE
- ▁INDESCRIBABLE
- ▁INDUSTRI
- ▁INVALID
- ▁MERCILESS
- ▁MESOPOTAMIA
- ▁MOSQUITO
- ▁NARRATIVE
- ▁NOWADAYS
- ▁OPPORTUNITIES
- ▁PROMISING
- ▁RECTANGLE
- ▁REMONSTRANCE
- ▁RESTAURANT
- ▁RIBBON
- ▁SCIENTIST
- ▁SHALMANESER
- ▁SKULL
- ▁SPRUCE
- ▁SUBSTANTIAL
- ▁SYMBOL
- ▁TEAPOT
- ▁TERRITORY
- ▁TRAFFIC
- ▁TREASON
- ▁TRUMPET
- ▁TYRANN
- ▁UNANIMOUS
- ▁UNAWARE
- ▁VICINITY
- ▁WREATH
- ▁ZADIG
- ▁CHATEAU
- ▁CONFRONT
- ▁DUCHESS
- ▁EMBODI
- ▁FEMININ
- ▁FURNACE
- ▁MONTONI
- ▁RENOWN
- ▁SMASH
- ▁HARVARD
- ▁NEWBERRY
- ▁PERFUME
- ▁SIGNATURE
- ▁SPLASH
- ▁SUPPOSITION
- ▁HARBOUR
- ▁ASSURANCE
- ▁BRISTOL
- ▁BUCKINGHAM
- ▁DUDLEY
- ▁INTENSITY
- ▁CHOPIN
- ▁ENLIST
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ali2066/finetuned_token_2e-05_16_02_2022-01_30_30
|
ali2066
| 2022-02-16T00:32:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-01_30_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_30_30
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- Precision: 0.3384
- Recall: 0.3492
- F1: 0.3437
- Accuracy: 0.9442
## Model description
More information needed
## Intended uses & limitations
More information needed
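As a rough, hedged illustration (not part of the original card), the checkpoint could be queried with a token-classification pipeline along the lines below; the entity label set for this run is not documented here, so predictions may appear under generic identifiers such as `LABEL_0`, `LABEL_1`, and so on.
```python
from transformers import pipeline

# Token-classification inference with the fine-tuned checkpoint from this repository.
# The entity label names are not documented in this card, so outputs may use generic ids.
ner = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_2e-05_16_02_2022-01_30_30",
    aggregation_strategy="simple",
)

print(ner("Replace this example sentence with text from your own domain."))
```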
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3180 | 0.0985 | 0.1648 | 0.1233 | 0.8643 |
| No log | 2.0 | 76 | 0.2667 | 0.1962 | 0.2698 | 0.2272 | 0.8926 |
| No log | 3.0 | 114 | 0.2374 | 0.2268 | 0.3005 | 0.2585 | 0.9062 |
| No log | 4.0 | 152 | 0.2305 | 0.2248 | 0.3247 | 0.2657 | 0.9099 |
| No log | 5.0 | 190 | 0.2289 | 0.2322 | 0.3166 | 0.2679 | 0.9102 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
explosion/en_healthsea
|
explosion
| 2022-02-15T23:40:53Z | 14 | 5 |
spacy
|
[
"spacy",
"token-classification",
"text-classification",
"en",
"model-index",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
- text-classification
language:
- en
model-index:
- name: en_healthsea
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 80.77
- name: NER Recall
type: recall
value: 79.92
- name: NER F Score
type: f_score
value: 80.34
---
# Welcome to Healthsea ✨
Create better access to health with machine learning and natural language processing. This is the trained healthsea pipeline for analyzing user reviews of supplements by extracting their effects on health. This pipeline features a trained NER model and a custom Text Classification model with Clause Segmentation and Blinding capabilities.
> Read more in the [blog post](https://explosion.ai/blog/healthsea) and visit the [healthsea repository](https://github.com/explosion/healthsea) for all training workflows, custom components and training data.
| Feature | Description |
| --- | --- |
| **Name** | `en_healthsea` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.0,<3.3.0` |
| **Default Pipeline** | `sentencizer`, `tok2vec`, `ner`, `benepar`, `segmentation`, `clausecat`, `aggregation` |
| **Components** | `sentencizer`, `tok2vec`, `ner`, `benepar`, `segmentation`, `clausecat`, `aggregation` |
| **Vectors** | 684830 keys, 684830 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | MIT |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (6 labels for 2 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BENEFIT`, `CONDITION` |
| **`clausecat`** | `POSITIVE`, `NEUTRAL`, `NEGATIVE`, `ANAMNESIS` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 80.34 |
| `ENTS_P` | 80.77 |
| `ENTS_R` | 79.92 |
| `CATS_SCORE` | 74.87 |
| `CATS_MICRO_P` | 82.17 |
| `CATS_MICRO_R` | 80.85 |
| `CATS_MICRO_F` | 81.51 |
| `CATS_MACRO_P` | 78.01 |
| `CATS_MACRO_R` | 72.41 |
| `CATS_MACRO_F` | 74.87 |
| `CATS_MACRO_AUC` | 92.76 |
| `CATS_LOSS` | 297.22 |
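### Example usage
The snippet below is a minimal sketch, assuming the pipeline has been installed as a Python package (for example from the wheel published in this repository, including its benepar dependency) and that the aggregation component exposes its results on a custom `Doc._.health_effects` extension as described in the blog post; adjust the attribute name if your installed version differs.
```python
import spacy

# Load the packaged healthsea pipeline (install the wheel from this repository first)
nlp = spacy.load("en_healthsea")

doc = nlp("This is great for joint pain!")

# Entities from the NER component (BENEFIT / CONDITION)
for ent in doc.ents:
    print(ent.text, ent.label_)

# Aggregated health effects from the custom components
# (extension name assumed from the Healthsea blog post)
print(doc._.health_effects)
```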
|
huggingartists/led-zeppelin
|
huggingartists
| 2022-02-15T22:19:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/led-zeppelin",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/led-zeppelin
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e4763bba12e6411077a3e573cd290da0.433x433x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Led Zeppelin</div>
<a href="https://genius.com/artists/led-zeppelin">
<div style="text-align: center; font-size: 14px;">@led-zeppelin</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Led Zeppelin.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/led-zeppelin).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/led-zeppelin")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/cpexpb1w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Led Zeppelin's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/bna2epba) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/bna2epba/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/led-zeppelin')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/led-zeppelin")
model = AutoModelWithLMHead.from_pretrained("huggingartists/led-zeppelin")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
Leostronkest/DialoGPT
|
Leostronkest
| 2022-02-15T21:59:14Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"conversational",
"arxiv:1911.00536",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the responses generated by DialoGPT are comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print the last output tokens from the bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
Sourabh714/distilbert-base-uncased-finetuned-squad
|
Sourabh714
| 2022-02-15T20:47:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1573
## Model description
More information needed
## Intended uses & limitations
More information needed
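As a minimal sketch (not part of the original card), the checkpoint can be used for extractive question answering; the question and context strings below are illustrative only.
```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint from this repository
qa = pipeline("question-answering", model="Sourabh714/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```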
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2188 | 1.0 | 5533 | 1.1708 |
| 0.9519 | 2.0 | 11066 | 1.1058 |
| 0.7576 | 3.0 | 16599 | 1.1573 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
|
espnet
| 2022-02-15T19:51:13Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-summarization",
"en",
"dataset:how2",
"arxiv:2110.06263",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-summarization
language: en
datasets:
- how2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/roshansh_how2_asr_raw_ft_sum_valid.acc`
This model was trained by roshansh-cmu using how2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e6f42a9783a5d9eba0687c19417f933e890722d7
pip install -e .
cd egs2/how2/sum1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 7 15:24:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `04561cdf3b6c3bc1d51edb04c93b953759ef551d`
- Commit date: `Mon Feb 7 09:06:12 2022 -0500`
## asr_raw_ft_sum
|dataset|Snt|Wrd|ROUGE-1|ROUGE-2|ROUGE-L|METEOR|BERTScore|
|---|---|---|---|---|---|---|---|
|decode_sum_asr_model_valid.acc.best/dev5_test_sum|2127|69795|60.72|44.7|56.1|29.36|91.53|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer_vid_lf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_raw_ft_sum
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45875
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 5000
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/asr_raw_utt_conformer/valid.acc.ave_10best.pth:::ctc
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 60000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_vid_sum/train/speech_shape
- exp/asr_stats_raw_vid_sum/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_vid_sum/valid/speech_shape
- exp/asr_stats_raw_vid_sum/valid/text_shape.bpe
batch_type: length
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_2000h_sum_trim/wav.scp
- speech
- sound
- - dump/raw/tr_2000h_sum_trim/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/cv05_sum_trim/wav.scp
- speech
- sound
- - dump/raw/cv05_sum_trim/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
token_list:
- <blank>
- <unk>
- '[hes]'
- S
- ▁THE
- ▁TO
- ''''
- ▁AND
- ▁YOU
- ▁A
- ▁IT
- T
- ▁THAT
- ▁OF
- ▁I
- ▁IS
- RE
- ▁IN
- ING
- ▁WE
- M
- ▁GOING
- ▁SO
- ▁THIS
- ▁YOUR
- ▁ON
- E
- D
- ▁BE
- ▁CAN
- N
- Y
- O
- ER
- ▁HAVE
- ▁JUST
- ▁FOR
- ▁WITH
- ▁DO
- ED
- ▁ARE
- ▁WANT
- ▁UP
- R
- LL
- P
- ▁
- L
- B
- ▁IF
- C
- ▁ONE
- ▁S
- ▁OR
- A
- ▁GO
- ▁LIKE
- ▁NOW
- ▁HERE
- VE
- LE
- U
- ▁GET
- ▁WHAT
- ▁OUT
- IN
- W
- ▁C
- ▁LITTLE
- ▁THERE
- LY
- ▁AS
- ▁MAKE
- I
- ▁THEY
- ▁MY
- K
- ▁THEN
- ▁BUT
- AL
- G
- ▁ALL
- OR
- ▁BACK
- ▁NOT
- ▁ABOUT
- ▁RIGHT
- ▁OUR
- EN
- ▁SOME
- ▁DOWN
- F
- ▁WHEN
- CH
- ▁F
- ▁HOW
- AR
- ▁WILL
- ▁RE
- CK
- ▁G
- ES
- CE
- ▁TAKE
- ▁AT
- ▁FROM
- ▁WAY
- TER
- ▁SEE
- RA
- ▁USE
- ▁REALLY
- RI
- TH
- ▁TWO
- ▁ME
- ▁VERY
- ▁E
- ▁B
- AT
- ▁THEM
- ▁DON
- ▁AN
- ▁BECAUSE
- ▁MORE
- RO
- H
- 'ON'
- LI
- ▁PUT
- ▁ST
- IL
- ▁BIT
- ▁START
- ▁NEED
- ▁INTO
- UR
- ▁TIME
- ▁OVER
- ▁W
- ▁DE
- ▁LOOK
- ▁THESE
- ▁LET
- ▁GOOD
- ▁ALSO
- AN
- ▁OFF
- ▁HE
- ▁KIND
- ▁SIDE
- ▁CO
- ▁SURE
- ▁AGAIN
- ▁MA
- ▁KNOW
- IT
- ▁WOULD
- IC
- ▁OTHER
- LA
- ▁P
- ▁WHICH
- '-'
- IR
- ▁LA
- ▁HAND
- EL
- ▁LOT
- ▁WHERE
- ▁THREE
- ▁PA
- ION
- LO
- ▁KEEP
- ▁SHOW
- ▁THING
- ▁FIRST
- TE
- ENT
- ATE
- ▁COME
- AD
- ▁GOT
- NG
- ▁NICE
- ▁T
- ET
- ▁MO
- ▁ANY
- ▁ACTUALLY
- ▁DIFFERENT
- ▁SE
- GE
- ▁WORK
- ▁THROUGH
- ▁O
- KE
- V
- ▁AROUND
- ▁BA
- PE
- ▁HI
- ▁BY
- SH
- ATION
- ▁SU
- ▁CA
- ▁D
- ▁LO
- ▁HAS
- ▁LI
- ▁PLAY
- Z
- ▁ADD
- ▁RO
- ▁TA
- AS
- ▁FOUR
- ▁CON
- ▁THOSE
- MP
- NE
- ▁SP
- UT
- ▁GIVE
- ▁WELL
- ▁BALL
- TING
- RY
- X
- ▁HO
- INE
- IVE
- ▁NEXT
- ▁PO
- ▁STEP
- ▁EVEN
- TION
- ▁MI
- MENT
- ▁CUT
- ▁BO
- ▁LINE
- ▁MUCH
- ▁THINGS
- ▁TALK
- UN
- ▁PART
- ▁WAS
- ▁FA
- ▁SOMETHING
- PP
- ANCE
- ND
- DI
- ▁RA
- AGE
- ▁SAME
- ▁EXPERT
- ▁DOING
- ▁LEFT
- IST
- ▁DI
- ▁NO
- RU
- ME
- TA
- UL
- TI
- ▁VILLAGE
- DE
- ERS
- ▁PEOPLE
- ▁TURN
- VER
- ▁FL
- ▁LEG
- ▁ONCE
- ▁COLOR
- ▁PULL
- ▁USING
- VI
- ▁WATER
- ▁SHE
- ▁TOP
- ▁OKAY
- ▁ANOTHER
- ▁THEIR
- ▁SAY
- URE
- ▁HA
- ▁IMPORTANT
- ▁PIECE
- ▁FOOT
- ▁TRA
- ▁SC
- ▁BODY
- ▁SET
- ▁POINT
- ▁HELP
- ▁TODAY
- ▁BRING
- ▁V
- ▁END
- MA
- ▁CH
- ▁MOST
- ▁K
- ▁AHEAD
- ▁HER
- OL
- ▁SA
- AM
- IES
- ▁THINK
- ▁NAME
- ▁TRY
- ▁MOVE
- ONE
- ▁LE
- ▁TOO
- TO
- UM
- ▁PLACE
- ▁COULD
- ▁FIND
- ▁FIVE
- ▁ALWAYS
- ID
- TY
- NT
- ▁FEEL
- ▁HEAD
- ▁THAN
- NA
- ▁EX
- ▁EYE
- ITY
- CI
- OP
- ▁SHOULD
- ▁MIGHT
- ▁HOLD
- ▁CAR
- AND
- ▁GREAT
- ▁RI
- ▁BU
- ▁HIGH
- ▁OPEN
- ▁BEFORE
- US
- ▁FRONT
- ▁LONG
- ▁TOGETHER
- NI
- ▁HAIR
- ▁LIGHT
- ▁TEN
- ▁HIT
- EST
- OUS
- ▁PRETTY
- ▁TYPE
- IP
- CO
- ▁FINGER
- ▁JO
- ▁UN
- ▁PRO
- ▁STRAIGHT
- ▁BEHALF
- ▁TI
- ▁SIX
- ▁CLEAN
- ▁DIS
- ▁DA
- ▁POSITION
- IGHT
- ACT
- ▁CHA
- ▁PE
- GG
- AP
- ▁MEAN
- ▁COMP
- FI
- ▁KNEE
- ▁CALLED
- ▁HANDS
- ▁PRE
- ▁FORWARD
- ▁AREA
- ANT
- ▁TE
- ▁WA
- ▁AFTER
- ▁SMALL
- ▁THROW
- ▁EVERY
- ▁SHOULDER
- NC
- PER
- ▁MAYBE
- ▁ABLE
- ▁BASICALLY
- ▁AM
- ▁READY
- ▁BOTTOM
- IE
- ▁HALF
- FF
- ▁BIG
- ▁EACH
- ▁PUSH
- ▁EIGHT
- ▁NEW
- ▁DONE
- ▁MAY
- ▁GETTING
- HO
- ▁HIS
- ▁HARD
- ▁CLOSE
- ALLY
- ▁SECOND
- ▁FEET
- ICAL
- ▁JA
- ▁PAINT
- ▁LEARN
- ▁SOUND
- HE
- ▁ROLL
- ▁ONLY
- ▁DOESN
- WA
- ▁DRAW
- ▁VI
- ▁DID
- ▁SHA
- ▁CENTER
- CU
- ▁CLIP
- ▁PI
- ▁CARD
- ▁INSIDE
- ▁PERSON
- ▁STILL
- ▁MAKING
- 'NO'
- ▁EVERYTHING
- .
- ▁FUN
- ARD
- ▁REMEMBER
- ▁AWAY
- ATED
- COM
- ▁SEVEN
- ▁BEEN
- ▁MANY
- ABLE
- ▁DAY
- ▁SIT
- IZE
- ▁REAL
- ▁HIP
- ▁BASIC
- ▁KICK
- ▁TU
- ATING
- ▁STICK
- ▁FLAT
- ▁WHO
- END
- HA
- ▁EXP
- ▁PICK
- ▁MIX
- ▁TRI
- ▁BI
- ▁WHOLE
- ▁STRETCH
- ▁BOTH
- ▁PROBABLY
- CA
- ▁HIM
- ▁STRING
- ▁EDGE
- ▁BASE
- ▁COMING
- UGH
- ▁LIFT
- ▁STA
- ▁WORKING
- ▁MU
- ▁QUICK
- ▁SOMETIMES
- ▁HAPPEN
- ▁YOURSELF
- ▁TALKING
- ▁DR
- ▁TELL
- ▁ANYTHING
- ▁BRA
- ▁LOOKING
- ▁SLOW
- ▁NE
- ▁STAND
- NER
- ▁COMES
- ▁GOES
- ISE
- BE
- ▁USED
- ▁UNDER
- ▁BETWEEN
- ▁HU
- ▁CREATE
- ▁NA
- ▁USUALLY
- ▁ARM
- ▁DRY
- ▁RUN
- LING
- ▁BRUSH
- ▁COVER
- ▁HEAR
- ▁DOES
- ▁STAY
- ▁EN
- ▁FOLD
- ▁CHANGE
- ▁LAST
- ▁EASY
- ▁US
- ▁PER
- ▁FACE
- ▁EAR
- ▁TIGHT
- ▁FE
- ▁PIN
- ▁MAN
- ▁BETTER
- ▁CALL
- ▁PRI
- ▁BEST
- ▁KI
- ▁COUPLE
- ▁WHILE
- ▁SHAPE
- ▁GAME
- IV
- ▁SHOT
- ▁PAPER
- ▁OWN
- ▁ALRIGHT
- ▁HAD
- TIC
- ▁BREATH
- ▁TOOL
- '2'
- ▁ENOUGH
- ▁COURSE
- ▁SKIN
- ▁SPIN
- ▁VA
- ▁ARMS
- ▁TEA
- ▁BREAK
- ▁DOG
- ▁1
- QUE
- ▁DROP
- ▁NUMBER
- IG
- ▁RED
- ▁NOTE
- ▁WEIGHT
- WARD
- ▁PLAYING
- ▁FINISH
- ▁MINUTE
- ▁R
- ▁PRESS
- ▁EITHER
- ▁CHE
- ▁PU
- BER
- ▁FEW
- ▁SIZE
- ▁MADE
- ▁LEAVE
- ▁GA
- ▁ALREADY
- ▁GUY
- ▁FAR
- ▁HOME
- ▁BAR
- UP
- ▁GRAB
- ▁MARK
- ▁WHITE
- ▁PROPER
- ▁CAUSE
- ▁OK
- ▁ART
- HI
- ▁SORT
- ▁EXERCISE
- ▁LOWER
- PORT
- ▁PLANT
- ▁BOARD
- ▁CASE
- ▁YEAR
- CENT
- ▁DU
- ▁CHECK
- ▁WHATEVER
- ▁OIL
- ▁IDEA
- ▁SIMPLE
- ▁PRACTICE
- ▁FAST
- '0'
- ▁CONTROL
- ▁J
- ▁KEY
- ▁MIDDLE
- ▁FULL
- ▁GLASS
- ▁OUTSIDE
- ▁LOW
- ▁REST
- ▁STUFF
- ▁ACT
- ▁UNTIL
- ▁BLACK
- ▁POP
- ▁CLICK
- ▁HOLE
- ▁Z
- ▁COUNT
- ▁POT
- ▁ALLOW
- ▁HAVING
- ▁TRYING
- ▁MUSCLE
- ▁GU
- ▁BOX
- ▁NOTICE
- ▁EXAMPLE
- UND
- ▁ALONG
- FUL
- ISH
- ▁STORE
- ▁LU
- ▁FLOOR
- ▁MOVING
- ▁LARGE
- ▁STOP
- ▁PH
- ▁WALK
- '5'
- ▁QU
- ▁TECHNIQUE
- ▁SOFT
- ▁GROUND
- ▁JUMP
- ▁JU
- ▁FILL
- ▁WHY
- ▁BUY
- ▁GREEN
- ▁WALL
- ▁HEEL
- NESS
- ▁LEVEL
- ▁UNDERNEATH
- ▁PATTERN
- ▁BEHIND
- ▁OLD
- ▁TIP
- ▁COMPLETE
- ▁WON
- ▁TEACH
- ▁FIT
- ▁NECK
- ▁REMOVE
- ▁TRICK
- ▁MOVEMENT
- ▁TOWARDS
- ▁PARTICULAR
- ▁CHI
- ▁EFFECT
- J
- ▁FREE
- ▁ACROSS
- ▁BEND
- ▁SAFE
- ▁SLIDE
- ▁PROBLEM
- ▁BLOCK
- ▁PAN
- ▁NATURAL
- ▁TOUCH
- ▁CHILD
- LINE
- ▁CROSS
- ▁REASON
- '4'
- ▁POWER
- ▁APPLY
- ▁FOLLOW
- ▁DESIGN
- ▁SPACE
- ▁ORDER
- ▁WOOD
- ▁RID
- '3'
- ▁COOK
- ▁BEGIN
- ▁WATCH
- ▁STYLE
- QUA
- ▁PRODUCT
- ▁TAKING
- ▁PUTTING
- ▁EXHALE
- ▁THOUGH
- ▁DEEP
- IAN
- ▁REACH
- ▁FOOD
- ▁ALMOST
- ▁COOL
- ▁SECTION
- ▁SAID
- ▁ANGLE
- ▁MUSIC
- ▁RELAX
- ▁CORNER
- ▁DARK
- ▁CHORD
- ▁ESPECIALLY
- ▁SCALE
- ▁WARM
- ▁WITHOUT
- ▁WHEEL
- ▁SEGMENT
- ▁TABLE
- ▁BOOK
- ▁PASS
- ▁ELBOW
- ▁ROUND
- ▁INHALE
- ▁SMOOTH
- ▁ROOM
- /
- ▁NINE
- ▁SHORT
- ▁MEASURE
- ▁LESS
- ▁TWIST
- ▁BALANCE
- ▁PROCESS
- ▁SWITCH
- ▁GENERAL
- ▁CLAY
- ▁CERTAIN
- ▁NEVER
- ▁BLUE
- ▁CUP
- ▁HOUSE
- ▁EXTRA
- ▁MOTION
- ▁PRESSURE
- ▁FIRE
- ▁SIMPLY
- ▁DOUBLE
- ▁TWENTY
- ▁CATCH
- ▁BECOME
- ▁BUILD
- ▁SPEED
- ▁TRANS
- ▁DRUM
- ▁CHEST
- ▁PICTURE
- ▁LENGTH
- ▁CONTINUE
- ▁COMFORTABLE
- ▁FISH
- ▁PHOTO
- ▁LOOSE
- ▁SKI
- ▁LIFE
- ▁DEGREE
- ▁OPTION
- ▁WORD
- ▁SHARP
- ▁SHOOT
- ▁FOUND
- ▁STRONG
- ▁QUITE
- ▁THIRD
- ▁GLUE
- ▁MIND
- ▁DEFINITELY
- ▁EASIER
- GRAPH
- ▁HOOK
- ▁CLEAR
- ▁POSE
- ▁BUTTON
- ▁CHOOSE
- ▁THICK
- ▁SYSTEM
- ▁PERFECT
- ▁BEAUTIFUL
- ▁SPOT
- ▁GROW
- ▁SIGN
- ▁ELSE
- ▁CONNECT
- ▁SELECT
- ▁PUNCH
- ▁DIRECTION
- ▁WRAP
- ▁RELEASE
- QUI
- SIDE
- ▁CAREFUL
- ▁VIDEO
- ▁INSTEAD
- ▁CIRCLE
- ▁WIRE
- ▁NOSE
- ▁AMOUNT
- ▁FOCUS
- ▁NORMAL
- ▁MAJOR
- ▁WHETHER
- ▁SURFACE
- ▁THUMB
- ▁DRIVE
- ▁SCREW
- ▁POSSIBLE
- ▁OBVIOUSLY
- ▁COMMON
- ▁REGULAR
- ▁ADJUST
- ▁WIDE
- ▁BLADE
- ▁FRET
- ▁RECOMMEND
- ▁BOWL
- BOARD
- ▁IMAGE
- ▁DEPENDING
- ▁PROTECT
- ▁CLOTH
- ▁HEALTH
- ▁WRIST
- ▁CLUB
- ▁DRINK
- ▁SINCE
- ▁FRIEND
- '00'
- ▁RUNNING
- ▁ITSELF
- ▁RECORD
- ▁SWING
- ▁DIRECT
- ▁MATERIAL
- ▁YO
- ▁LEAST
- ▁EXACTLY
- ▁BEGINNING
- ▁SLIGHTLY
- ▁TREAT
- ▁CAMERA
- ▁QUARTER
- ▁WINDOW
- '8'
- ▁SOMEBODY
- ▁BURN
- ▁DEMONSTRATE
- ▁DIFFERENCE
- ▁COMPUTER
- IBLE
- ▁SHOE
- ▁PERFORM
- ▁SQUARE
- ▁CONSIDER
- ▁DRILL
- ▁TEXT
- ▁FILE
- ▁RUB
- ▁FABRIC
- ▁HUNDRED
- ▁GRIP
- ▁CHARACTER
- ▁SPECIFIC
- ▁KNOT
- ▁CURL
- ▁STITCH
- ▁BLEND
- ▁FRAME
- ▁THIRTY
- '1'
- ▁HORSE
- ▁ATTACH
- ▁GROUP
- ▁STROKE
- ▁GUITAR
- ▁APART
- ▁MACHINE
- ▁CLASS
- ▁COMB
- ▁ROOT
- ▁HELLO
- ▁ENERGY
- ▁ATTACK
- ▁CORRECT
- ▁EXTEND
- ▁MINOR
- ▁PROFESSIONAL
- ▁MONEY
- ▁STRIP
- ▁FLAVOR
- ▁EVERYBODY
- ▁RULE
- ▁DIFFICULT
- ▁PROJECT
- ▁DISCUSS
- ▁FIGURE
- ▁HOWEVER
- ▁FINAL
- ▁STRENGTH
- ▁ENTIRE
- ▁FIELD
- ▁CONTACT
- ▁SUPPORT
- ▁PALM
- ▁SERIES
- ▁ENJOY
- '6'
- ▁WORLD
- ▁DECIDE
- ▁SPEAK
- ▁SEVERAL
- ▁WRITE
- ▁PROGRAM
- ABILITY
- ▁KNIFE
- ▁PLASTIC
- ▁ORGAN
- '7'
- ▁UNDERSTAND
- ▁FIFTEEN
- ▁FLEX
- ▁INFORMATION
- ▁TWELVE
- ▁DETAIL
- ▁STRIKE
- ▁ACTUAL
- ▁SPRAY
- ▁LOCAL
- ▁MOUTH
- ▁NIGHT
- ▁VEHICLE
- ▁OPPOSITE
- ▁SCHOOL
- '9'
- ▁QUESTION
- ▁SPECIAL
- ▁BIGGER
- ▁DEVELOP
- ▁PEPPER
- ▁PREFER
- Q
- '%'
- ']'
- '['
- '&'
- ','
- _
- '#'
- '='
- '@'
- +
- '*'
- $
- '~'
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.0
lsm_weight: 0.15
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: data/nlsyms
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_vid_sum/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: abs_pos
selfattention_layer_type: lf_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
attention_windows:
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
attention_dilation:
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
attention_mode: tvm
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 512
num_blocks: 6
dropout_rate: 0.15
positional_dropout_rate: 0.15
self_attention_dropout_rate: 0.15
src_attention_dropout_rate: 0.15
required:
- output_dir
- token_list
version: 0.10.0
distributed: true
```
</details>
Please cite the following paper if you use this recipe:
```BibTex
@misc{sharma2022speech,
title={Speech Summarization using Restricted Self-Attention},
author={Roshan Sharma and Shruti Palaskar and Alan W Black and Florian Metze},
year={2022},
eprint={2110.06263},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
AI-Nordics/bert-large-swedish-cased
|
AI-Nordics
| 2022-02-15T16:52:53Z | 162 | 11 |
transformers
|
[
"transformers",
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: sv
---
# A Swedish Bert model
## Model description
This model follows the BERT Large architecture as implemented in the [Megatron-LM framework](https://github.com/NVIDIA/Megatron-LM). It was trained with a batch size of 512 for 600k steps. The model contains the following parameters:
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 340M |
| \\(n_{layers}\\) | 24 |
| \\(n_{heads}\\) | 16 |
| \\(n_{ctx}\\) | 1024 |
| \\(n_{vocab}\\) | 30592 |
## Training data
The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below.
| Dataset | Genre | Size(GB)|
|----------------------|------|------|
| Anföranden | Politics |0.9|
|DCEP|Politics|0.6|
|DGT|Politics|0.7|
|Fass|Medical|0.6|
|Författningar|Legal|0.1|
|Web data|Misc|45.0|
|JRC|Legal|0.4|
|Litteraturbanken|Books|0.3|
|SCAR|Misc|28.0|
|SOU|Politics|5.3|
|Subtitles|Drama|1.3|
|Wikipedia|Facts|1.8|
## Intended uses & limitations
The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
<br>
<br>
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("AI-Nordics/bert-large-swedish-cased")
model = AutoModelForMaskedLM.from_pretrained("AI-Nordics/bert-large-swedish-cased")
```
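As a follow-up, a short fill-mask sketch is shown below; the example sentence and the use of the standard `[MASK]` token are illustrative assumptions, not taken from the original card.
```python
from transformers import pipeline

# Masked-language-model inference with the Swedish BERT checkpoint
unmasker = pipeline("fill-mask", model="AI-Nordics/bert-large-swedish-cased")
for prediction in unmasker("Huvudstaden i Sverige är [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```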
|
Xibanya/sunset_city
|
Xibanya
| 2022-02-15T16:31:37Z | 0 | 3 | null |
[
"PyTorch",
"Transformers",
"text-to-image",
"ru",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-to-image
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
language:
- ru
- en
pipeline_tag: text-to-image
tags:
- PyTorch
- Transformers
---
# Sunset Cities
This is the [Malevich](https://huggingface.co/sberbank-ai/rudalle-Malevich) ruDALL-E model finetuned on anime screenshots of big cities at sunset.
<img style="text-align:center; display:block;" src="https://huggingface.co/Xibanya/sunset_city/resolve/main/citysunset.png" width="256">
### installation
```
pip install rudalle
```
### How to use
Basic implementation to get a list of image data objects.
```python
import torch
from translate import Translator
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images

# load the base Malevich checkpoint, then the finetuned weights from this repository
model = get_rudalle_model('Malevich', pretrained=True, fp16=True, device='cuda')
model.load_state_dict(torch.load(CHECKPOINT_PATH))
vae = get_vae().to('cuda')
tokenizer = get_tokenizer()
input_text = Translator(to_lang='ru').translate('city at sunset')
images, _ = generate_images(
text=input_text,
tokenizer=tokenizer, dalle=model, vae=vae,
images_num=1,
top_k=2048,
top_p=0.95,
temperature=1.0
)
```
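Assuming the returned objects are PIL images, as in recent versions of the rudalle library (this is an assumption; the card only describes them as image data objects), they can be saved directly:
```python
# Save the generated images to disk (assumes generate_images returned PIL images)
for i, img in enumerate(images):
    img.save(f"sunset_city_{i}.png")
```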
The Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than filter an English prompt through the translate API, you will need to put this at the top of the file:
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
```
|
GleamEyeBeast/Mandarin_naive
|
GleamEyeBeast
| 2022-02-15T13:44:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Mandarin_naive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mandarin_naive
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4584
- Wer: 0.3999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8963 | 3.67 | 400 | 1.0645 | 0.8783 |
| 0.5506 | 7.34 | 800 | 0.5032 | 0.5389 |
| 0.2111 | 11.01 | 1200 | 0.4765 | 0.4712 |
| 0.1336 | 14.68 | 1600 | 0.4815 | 0.4511 |
| 0.0974 | 18.35 | 2000 | 0.4956 | 0.4370 |
| 0.0748 | 22.02 | 2400 | 0.4881 | 0.4235 |
| 0.0584 | 25.69 | 2800 | 0.4732 | 0.4193 |
| 0.0458 | 29.36 | 3200 | 0.4584 | 0.3999 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
joe5campbell/BERT_Tweet_Sentiment_10k
|
joe5campbell
| 2022-02-15T12:42:41Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_10k
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_10k
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3891
- Train Accuracy: 0.8273
- Validation Loss: 0.4749
- Validation Accuracy: 0.8073
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3891 | 0.8273 | 0.4749 | 0.8073 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
CLAck/vi-en
|
CLAck
| 2022-02-15T11:33:16Z | 47 | 1 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- vi
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This is a fine-tuned version of a MarianMT model pretrained on Chinese-English. The target language pair is Vietnamese-English.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for Vietnamese-English available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/vi-en")
tokenizer = AutoTokenizer.from_pretrained("CLAck/vi-en")
sentence = your_vietnamese_sentence
# This token is needed to identify the source language
input_sentence = "<2vi> " + sentence
translated = model.generate(**tokenizer(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 21.3180 |
| 2.0 | 26.8012 |
| 3.0 | 29.3578 |
| 4.0 | 31.5178 |
| 5.0 | 32.8740 |
|
CLAck/en-km
|
CLAck
| 2022-02-15T11:26:53Z | 39 | 3 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
---
This model translates from English to Khmer.
It is a purely fine-tuned version of the MarianMT en-zh model.
This is the result after 30 epochs of pure fine-tuning on the Khmer language.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Khmer available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/en-km")
tokenizer = AutoTokenizer.from_pretrained("CLAck/en-km")
# Download a tokenizer that can tokenize English, since the model's own tokenizer no longer can.
# We use the tokenizer from the initial en-zh model to tokenize the input sentence.
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2khm>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2khm> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
|
msintaha/bert-base-uncased-finetuned-copa-data-new
|
msintaha
| 2022-02-15T08:41:46Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-copa-data-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-copa-data-new
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5995
- Accuracy: 0.7000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6564 | 0.6600 |
| No log | 2.0 | 50 | 0.5995 | 0.7000 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
hyerim/distilbert-base-uncased-finetuned-ner
|
hyerim
| 2022-02-15T08:37:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9273570324574961
- name: Recall
type: recall
value: 0.9397024275646045
- name: F1
type: f1
value: 0.9334889148191365
- name: Accuracy
type: accuracy
value: 0.9837641190207635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9274
- Recall: 0.9397
- F1: 0.9335
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2403 | 1.0 | 878 | 0.0714 | 0.9171 | 0.9216 | 0.9193 | 0.9805 |
| 0.0555 | 2.0 | 1756 | 0.0604 | 0.9206 | 0.9347 | 0.9276 | 0.9829 |
| 0.031 | 3.0 | 2634 | 0.0617 | 0.9274 | 0.9397 | 0.9335 | 0.9838 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.7.1
- Datasets 1.18.3
- Tokenizers 0.10.1
|
Rafat/wav2vec2-base-timit-demo-colab
|
Rafat
| 2022-02-15T01:18:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
- Wer: 0.2386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5486 | 4.0 | 500 | 2.1672 | 0.9876 |
| 0.6819 | 8.0 | 1000 | 0.4502 | 0.3301 |
| 0.2353 | 12.0 | 1500 | 0.4352 | 0.2841 |
| 0.1427 | 16.0 | 2000 | 0.4237 | 0.2584 |
| 0.0945 | 20.0 | 2500 | 0.4409 | 0.2545 |
| 0.0671 | 24.0 | 3000 | 0.4257 | 0.2413 |
| 0.0492 | 28.0 | 3500 | 0.4229 | 0.2386 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hark99/distilbert-base-uncased-finetuned-squad
|
hark99
| 2022-02-14T23:05:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2251 | 1.0 | 5533 | 1.1707 |
| 0.9554 | 2.0 | 11066 | 1.1211 |
| 0.7645 | 3.0 | 16599 | 1.1642 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_50_Epochs
|
jfarray
| 2022-02-14T21:41:05Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
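Because the embeddings are dense vectors, a semantic-similarity check can be sketched by comparing two embeddings with cosine similarity (a minimal example; `util.cos_sim` from sentence-transformers is assumed to be available in your installed version):
```python
from sentence_transformers import SentenceTransformer, util

# Minimal similarity sketch; the model name placeholder and sentences mirror the usage example above
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
print(util.cos_sim(embeddings[0], embeddings[1]))
```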
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NicoGrageda/wav2vec2-base-timit-demo-colab
|
NicoGrageda
| 2022-02-14T21:18:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4519
- Wer: 0.3375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4351 | 4.0 | 500 | 1.2740 | 0.8259 |
| 0.5828 | 8.0 | 1000 | 0.4276 | 0.4403 |
| 0.2274 | 12.0 | 1500 | 0.4646 | 0.3739 |
| 0.135 | 16.0 | 2000 | 0.4320 | 0.3662 |
| 0.0962 | 20.0 | 2500 | 0.4831 | 0.3607 |
| 0.0719 | 24.0 | 3000 | 0.4506 | 0.3463 |
| 0.0556 | 28.0 | 3500 | 0.4519 | 0.3375 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs
|
jfarray
| 2022-02-14T21:06:23Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_bert-base-multilingual-uncased_100_Epochs
|
jfarray
| 2022-02-14T20:23:54Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/magicrealismbot
|
huggingtweets
| 2022-02-14T18:15:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/668872745329885184/67TNOs2A_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Realism Bot</div>
<div style="text-align: center; font-size: 14px;">@magicrealismbot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Realism Bot.
| Data | Magic Realism Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nx0qvg7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magicrealismbot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/magicrealismbot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-small-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:23:08Z | 6 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -146.39734268188477
MTF T5: -72.12132263183594
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-xl-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:23:01Z | 8 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -118.6875057220459
MTF T5: -76.85459899902344
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-base-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:22:41Z | 4 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -95.86687088012695
MTF T5: -67.8558578491211
|
vblagoje/dpr-ctx_encoder-single-lfqa-wiki
|
vblagoje
| 2022-02-14T15:51:28Z | 4,105 | 3 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"en",
"dataset:vblagoje/lfqa",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- vblagoje/lfqa
license: mit
---
## Introduction
The context/passage encoder model based on [DPRContextEncoder](https://huggingface.co/docs/transformers/master/en/model_doc/dpr#transformers.DPRContextEncoder) architecture. It uses the transformer's pooler outputs as context/passage representations. See [blog post](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb) for more details.
## Training
We trained vblagoje/dpr-ctx_encoder-single-lfqa-wiki using FAIR's dpr-scale in two stages. In the first stage, we used a PAQ-based pretrained checkpoint and fine-tuned the retriever on the question-answer pairs from the LFQA dataset. Because dpr-scale requires DPR-formatted training input with positive, negative, and hard-negative samples, we created a training file in which a question's answer is the positive, answers to unrelated questions are the negatives, and hard negatives are drawn from answers to questions with a cosine similarity between 0.55 and 0.65.
In the second stage, we created a new DPR training set using positives, negatives, and hard negatives from the Wikipedia/Faiss index built in the first stage instead of from LFQA dataset answers. More precisely, for each dataset question we queried the first-stage Wikipedia Faiss index and then used an SBert cross-encoder to score question/answer (passage) pairs with topk=50. The cross-encoder's highest-scoring passage was selected as the positive, the bottom seven answers were selected as hard negatives, and negatives were again chosen to be answers unrelated to a given dataset question. After creating a DPR-formatted training file with Wikipedia-sourced positive, negative, and hard-negative passages, we trained the DPR-based question/passage encoders using dpr-scale.
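The hard-negative band described above can be sketched as follows; this is an illustration only, with a hypothetical encoder ('all-MiniLM-L6-v2') and toy candidates standing in for the actual dpr-scale data preparation:
```python
from sentence_transformers import SentenceTransformer, util

# Illustrative sketch of picking hard negatives inside the 0.55-0.65 cosine-similarity band;
# the encoder and the candidate texts are assumptions, not the original pipeline.
encoder = SentenceTransformer('all-MiniLM-L6-v2')
question = "Why do planes leave contrails in the sky?"
candidates = [
    "Contrails form when hot exhaust meets cold air...",
    "Bananas are rich in potassium...",
    "Water vapour condenses around exhaust particles at altitude...",
]
scores = util.cos_sim(encoder.encode(question, convert_to_tensor=True),
                      encoder.encode(candidates, convert_to_tensor=True))[0]
hard_negatives = [c for c, s in zip(candidates, scores.tolist()) if 0.55 <= s <= 0.65]
```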
## Performance
The LFQA DPR-based retrievers (vblagoje/dpr-question_encoder-single-lfqa-wiki and vblagoje/dpr-ctx_encoder-single-lfqa-wiki) slightly underperform the 'state-of-the-art' REALM-based retriever of Krishna et al., "Hurdles to Progress in Long-form Question Answering", with a KILT benchmark performance of 11.2 for R-precision and 19.5 for Recall@5.
## Usage
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-wiki")
model = DPRContextEncoder.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-wiki")
input_ids = tokenizer("Where an aircraft passes through a cloud, it can disperse the cloud in its path...", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
```
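For retrieval, the passage embedding above is typically scored against an embedding from the companion question encoder mentioned in the Performance section (vblagoje/dpr-question_encoder-single-lfqa-wiki) with a dot product; a minimal sketch (the example question is illustrative):
```python
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Sketch only: pairs the passage embedding computed above with the companion question encoder
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-wiki")
q_encoder = DPRQuestionEncoder.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-wiki")
q_ids = q_tokenizer("Why do aircraft disperse clouds in their path?", return_tensors="pt")["input_ids"]
q_embedding = q_encoder(q_ids).pooler_output
score = torch.matmul(q_embedding, embeddings.T)  # `embeddings` is the passage embedding from the snippet above
```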
## Author
- Vladimir Blagojevic: `dovlex [at] gmail.com` [Twitter](https://twitter.com/vladblagoje) | [LinkedIn](https://www.linkedin.com/in/blagojevicvladimir/)
|
huggingtweets/dojacat
|
huggingtweets
| 2022-02-14T15:30:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dojacat/1644852645931/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1487993727918374915/aN2YUrbc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jean-Emmanuel De La Martinière</div>
<div style="text-align: center; font-size: 14px;">@dojacat</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jean-Emmanuel De La Martinière.
| Data | Jean-Emmanuel De La Martinière |
| --- | --- |
| Tweets downloaded | 1569 |
| Retweets | 124 |
| Short tweets | 322 |
| Tweets kept | 1123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mc5ryte/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dojacat's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dojacat')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
groar/gpt-neo-1.3B-finetuned-escape3
|
groar
| 2022-02-14T15:17:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-finetuned-escape3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-finetuned-escape3
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft
|
reach-vb
| 2022-02-14T13:39:07Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-1B-common_voice7-lt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1B-common_voice7-lt-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5101
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 36
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 900
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.3491 | 31.24 | 500 | 3.9827 | 1.0 |
| 0.0421 | 62.48 | 1000 | 2.9544 | 1.0 |
| 0.0163 | 93.73 | 1500 | 2.5101 | 1.0 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
hrdipto/wav2vec2-xls-r-300m-bangla-command-generated-data-finetune
|
hrdipto
| 2022-02-14T08:58:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-generated-data-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-generated-data-finetune
This model is a fine-tuned version of [hrdipto/wav2vec2-xls-r-300m-bangla-command-data](https://huggingface.co/hrdipto/wav2vec2-xls-r-300m-bangla-command-data) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0099
- eval_wer: 0.0208
- eval_runtime: 2.5526
- eval_samples_per_second: 75.217
- eval_steps_per_second: 9.402
- epoch: 71.43
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingartists/bill-wurtz
|
huggingartists
| 2022-02-14T08:56:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/bill-wurtz",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/bill-wurtz
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/0d4b35ed37091d5f6fd59806810e14ca.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bill Wurtz</div>
<a href="https://genius.com/artists/bill-wurtz">
<div style="text-align: center; font-size: 14px;">@bill-wurtz</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Bill Wurtz.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/bill-wurtz).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/bill-wurtz")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/27ysbe74/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Bill Wurtz's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2f8oa51l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2f8oa51l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/bill-wurtz')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/bill-wurtz")
model = AutoModelWithLMHead.from_pretrained("huggingartists/bill-wurtz")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc
|
ASCCCCCCCC
| 2022-02-14T08:54:32Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
jatinshah/marian-finetuned-kde4-en-to-fr
|
jatinshah
| 2022-02-14T05:47:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8815
- Score: 52.2204
- Counts: [166010, 120787, 91973, 70929]
- Totals: [228361, 207343, 189354, 173335]
- Precisions: [72.69630103213771, 58.254679444205976, 48.57198686058916, 40.92018345977443]
- Bp: 0.9695
- Sys Len: 228361
- Ref Len: 235434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
fastai/fastbook_06_multicat_Biwi_Kinect_Head_Pose
|
fastai
| 2022-02-14T05:21:20Z | 6 | 2 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- fastai
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
jfarray/Model_bert-base-multilingual-uncased_30_Epochs
|
jfarray
| 2022-02-13T23:54:47Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_bert-base-multilingual-uncased_10_Epochs
|
jfarray
| 2022-02-13T23:21:43Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_100_Epochs
|
jfarray
| 2022-02-13T20:50:24Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_30_Epochs
|
jfarray
| 2022-02-13T20:00:26Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_10_Epochs
|
jfarray
| 2022-02-13T19:47:38Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_5_Epochs
|
jfarray
| 2022-02-13T19:40:19Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
turing1729/gpt-neo-1.3B-news
|
turing1729
| 2022-02-13T10:21:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
GPT-Neo (1.3B parameters) fine-tuned on short news articles for summarization.
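A minimal usage sketch with the Transformers `text-generation` pipeline. The "TL;DR:" prompt format is an assumption; adapt it to however the articles were formatted during fine-tuning.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="turing1729/gpt-neo-1.3B-news")

article = "Some short news article text goes here."
prompt = article + "\nTL;DR:"  # assumed prompt format
output = generator(prompt, max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"])
```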
|
srosy/distilbert-base-uncased-finetuned-emotion
|
srosy
| 2022-02-13T09:39:07Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.939
- name: F1
type: f1
value: 0.9391566069722169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.939
- F1: 0.9392
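For reference, a minimal inference sketch with the Transformers pipeline. The example sentence is arbitrary; labels come from the emotion dataset, or appear as `LABEL_i` if `id2label` was not saved with the checkpoint.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="srosy/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': ...}]
```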
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4977 | 1.0 | 1000 | 0.1919 | 0.9255 | 0.9253 |
| 0.1545 | 2.0 | 2000 | 0.1582 | 0.939 | 0.9392 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mujeensung/albert-base-v2_mnli_bc
|
mujeensung
| 2022-02-13T05:23:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2_mnli_bc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9398776667163956
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2_mnli_bc
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2952
- Accuracy: 0.9399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
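These settings correspond roughly to the following `TrainingArguments` (a sketch only; dataset loading, tokenization, and the `Trainer` call itself are omitted):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="albert-base-v2_mnli_bc",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # mixed_precision_training: Native AMP
)
```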
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2159 | 1.0 | 16363 | 0.2268 | 0.9248 |
| 0.1817 | 2.0 | 32726 | 0.2335 | 0.9347 |
| 0.0863 | 3.0 | 49089 | 0.3014 | 0.9401 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mujeensung/roberta-base_mnli_bc
|
mujeensung
| 2022-02-13T05:13:00Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base_mnli_bc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9583768461882739
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_mnli_bc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2125
- Accuracy: 0.9584
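A minimal inference sketch for a premise/hypothesis pair. The exact label mapping is not documented in this card, so `id2label` is read from the model config rather than assumed.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mujeensung/roberta-base_mnli_bc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```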
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2015 | 1.0 | 16363 | 0.1820 | 0.9470 |
| 0.1463 | 2.0 | 32726 | 0.1909 | 0.9559 |
| 0.0768 | 3.0 | 49089 | 0.2117 | 0.9585 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_100_Epochs
|
jfarray
| 2022-02-13T00:33:38Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_10_Epochs
|
jfarray
| 2022-02-12T22:32:17Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_1_Epochs
|
jfarray
| 2022-02-12T21:48:20Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_100_Epochs
|
jfarray
| 2022-02-12T21:38:44Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_50_Epochs
|
jfarray
| 2022-02-12T21:16:09Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_30_Epochs
|
jfarray
| 2022-02-12T21:00:41Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_5_Epochs
|
jfarray
| 2022-02-12T20:37:59Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_distiluse-base-multilingual-cased-v1_100_Epochs
|
jfarray
| 2022-02-12T19:45:48Z | 137 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
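For context, a module stack like the one above can be assembled by hand from sentence-transformers building blocks. This is a sketch; the multilingual DistilBERT base checkpoint is an assumption inferred from the architecture, not stated in this card.
```python
import torch
from sentence_transformers import SentenceTransformer, models

word_embedding = models.Transformer("distilbert-base-multilingual-cased", max_seq_length=128)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)
dense = models.Dense(in_features=768, out_features=512,
                     activation_function=torch.nn.Tanh())

model = SentenceTransformer(modules=[word_embedding, pooling, dense])
print(model.encode(["hola mundo"]).shape)  # (1, 512)
```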
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_distiluse-base-multilingual-cased-v1_30_Epochs
|
jfarray
| 2022-02-12T14:08:36Z | 142 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ArBert/roberta-base-finetuned-ner-kmeans-twitter
|
ArBert
| 2022-02-12T12:53:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-kmeans-twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-kmeans-twitter
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Precision: 0.6885
- Recall: 0.7665
- F1: 0.7254
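A minimal inference sketch with the token-classification pipeline. The example text is arbitrary and the entity label set is not documented in this card.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/roberta-base-finetuned-ner-kmeans-twitter",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)
print(ner("Just landed in Paris for the @acme_corp conference!"))
```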
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 |
| No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 |
| 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 |
| 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 |
| 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 |
| 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 |
| 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 |
| 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 |
| 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 |
| 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 |
| 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 |
| 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 |
| 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 |
| 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 |
| 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 |
| 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 |
| 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 |
| 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 |
| 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 |
| 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ArBert/roberta-base-finetuned-ner-agglo-twitter
|
ArBert
| 2022-02-12T11:40:08Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-agglo-twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-agglo-twitter
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Precision: 0.6885
- Recall: 0.7665
- F1: 0.7254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 |
| No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 |
| 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 |
| 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 |
| 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 |
| 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 |
| 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 |
| 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 |
| 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 |
| 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 |
| 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 |
| 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 |
| 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 |
| 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 |
| 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 |
| 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 |
| 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 |
| 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 |
| 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 |
| 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sylviachency/distilbert-base-uncased-finetuned-cola
|
sylviachency
| 2022-02-12T06:48:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5235221651747541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9155
- Matthews Correlation: 0.5235
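A minimal inference sketch. CoLA is a grammatical-acceptability task; labels may show up as `LABEL_0`/`LABEL_1` if `id2label` was not saved with the checkpoint.
```python
from transformers import pipeline

cola = pipeline(
    "text-classification",
    model="sylviachency/distilbert-base-uncased-finetuned-cola",
)
print(cola("The book was written by the author."))   # likely acceptable
print(cola("Book the written author by was the."))   # likely unacceptable
```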
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5275 | 1.0 | 535 | 0.5174 | 0.4181 |
| 0.3496 | 2.0 | 1070 | 0.5617 | 0.4857 |
| 0.2359 | 3.0 | 1605 | 0.6661 | 0.5029 |
| 0.1701 | 4.0 | 2140 | 0.8052 | 0.5091 |
| 0.1266 | 5.0 | 2675 | 0.9155 | 0.5235 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/multi-qa-distilbert-base-uncased
|
jgammack
| 2022-02-11T23:40:41Z | 141 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jimypbr/bert-base-uncased-squad
|
jimypbr
| 2022-02-11T22:28:31Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# BERT-Base Uncased SQuADv1
`bert-base-uncased` fine-tuned for question answering on `squad`.
Evaluation scores:
```
***** eval metrics *****
epoch = 3.0
eval_exact_match = 80.6906
eval_f1 = 88.1129
eval_samples = 10784
```
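A minimal inference sketch with the 🤗 `pipeline` API (not part of the original card; it assumes the repository contains both the model and its tokenizer, and the question/context strings are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jimypbr/bert-base-uncased-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This BERT-base model was fine-tuned on the SQuAD v1.1 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```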
|
huggingtweets/sauce__world
|
huggingtweets
| 2022-02-11T22:14:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/sauce__world/1644617665459/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488960307305218049/nAFuBERK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">poolboy sauce world</div>
<div style="text-align: center; font-size: 14px;">@sauce__world</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from poolboy sauce world.
| Data | poolboy sauce world |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 323 |
| Short tweets | 513 |
| Tweets kept | 2356 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20dtxww4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sauce__world's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sauce__world')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ibombonato/swin-age-classifier
|
ibombonato
| 2022-02-11T21:42:47Z | 272 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: swin-age-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8174999952316284
---
# swin-age-classifier
Trained for 80 epochs.
Data from AIcrowd - Blitz (ai-blitz-xiii) - Age Prediction:
https://www.aicrowd.com/challenges/ai-blitz-xiii/problems/age-prediction/
Notebook based on HuggingPics
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
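A minimal inference sketch (not part of the original HuggingPics card; it assumes the repository ships an image-processor config, and `face.jpg` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ibombonato/swin-age-classifier")
# Accepts a local path or URL; returns a list of {'label': ..., 'score': ...} entries.
print(classifier("face.jpg"))
```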
|
BigSalmon/InformalToFormalLincoln21
|
BigSalmon
| 2022-02-11T21:24:42Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln21")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
```
***
wordy: chancing upon a linux user is a rare occurrence in the present day.
Translate into Concise Text: present-day linux users are rare.
***
wordy: an interest in classical music is becoming more and more less popular.
Translate into Concise Text: classical music appreciation is dwindling.
Translate into Concise Text: waning interest in classic music persists.
Translate into Concise Text: interest in classic music is fading.
***
wordy: the ice cream was only one dollar, but it was not a good value for the size.
Translate into Concise Text: the one dollar ice cream was overpriced for its size.
Translate into Concise Text: overpriced, the one dollar ice cream was small.
***
wordy:
```
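A rough generation sketch using the informal-to-formal prompt format above (the sampling settings are illustrative assumptions, not values recommended by the author):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln21")

prompt = ("informal english: i am very ready to do that just that.\n"
          "Translated into the Style of Abraham Lincoln:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,                    # illustrative
    do_sample=True,                       # illustrative
    top_p=0.9,                            # illustrative
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```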
|
microsoft/codebert-base
|
microsoft
| 2022-02-11T19:59:44Z | 574,944 | 236 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"roberta",
"feature-extraction",
"arxiv:2002.08155",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
## CodeBERT-base
Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).
### Training Data
The model is trained on bi-modal data (documents & code) of [CodeSearchNet](https://github.com/github/CodeSearchNet)
### Training Objective
This model is initialized with Roberta-base and trained with MLM+RTD objective (cf. the paper).
### Usage
Please see [the official repository](https://github.com/microsoft/CodeBERT) for scripts that support "code search" and "code-to-document generation".
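For quick experimentation, the checkpoint also loads directly with the 🤗 `transformers` API. The snippet below is only a sketch of extracting contextual embeddings, not one of the official downstream scripts:
```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

# CodeBERT is bi-modal: natural-language description first, code second.
nl = "return the maximum value in a list"
code = "def max_value(xs): return max(xs)"
inputs = tokenizer(nl, code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```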
### Reference
1. [CodeBERT trained with Masked LM objective](https://huggingface.co/microsoft/codebert-base-mlm) (suitable for code completion)
2. 🤗 [Hugging Face's CodeBERTa](https://huggingface.co/huggingface/CodeBERTa-small-v1) (small size, 6 layers)
### Citation
```bibtex
@misc{feng2020codebert,
title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou},
year={2020},
eprint={2002.08155},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/ezeojeda_97
|
huggingtweets
| 2022-02-11T18:26:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ezeojeda_97/1644604009323/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1491399079779352581/L0_MeHf1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Easy</div>
<div style="text-align: center; font-size: 14px;">@ezeojeda_97</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Easy.
| Data | Easy |
| --- | --- |
| Tweets downloaded | 348 |
| Retweets | 25 |
| Short tweets | 58 |
| Tweets kept | 265 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mcrv516/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ezeojeda_97's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12ymakai) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12ymakai/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ezeojeda_97')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AKulk/wav2vec2-base-timit-epochs5
|
AKulk
| 2022-02-11T16:48:06Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-epochs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs5
This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
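Note that the listed total train batch size follows from gradient accumulation: 16 samples per device step × 5 accumulation steps = 80. A sketch of the corresponding `TrainingArguments` (the actual training script is not part of this card; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-epochs5",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=5,             # effective batch size 16 * 5 = 80
    per_device_eval_batch_size=8,
    warmup_steps=1000,
    num_train_epochs=5,
    fp16=True,                                 # "Native AMP" mixed precision
    seed=42,
)
```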
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ArBert/bert-base-uncased-finetuned-ner-kmeans
|
ArBert
| 2022-02-11T16:45:09Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner-kmeans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner-kmeans
This model is a fine-tuned version of [ArBert/bert-base-uncased-finetuned-ner](https://huggingface.co/ArBert/bert-base-uncased-finetuned-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1169
- Precision: 0.9084
- Recall: 0.9245
- F1: 0.9164
- Accuracy: 0.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.036 | 1.0 | 1123 | 0.1010 | 0.9086 | 0.9117 | 0.9101 | 0.9779 |
| 0.0214 | 2.0 | 2246 | 0.1094 | 0.9033 | 0.9199 | 0.9115 | 0.9784 |
| 0.014 | 3.0 | 3369 | 0.1169 | 0.9084 | 0.9245 | 0.9164 | 0.9792 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
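For reference, a minimal inference sketch with the 🤗 `pipeline` API (not part of the original card; it assumes the repository contains the tokenizer files, and the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="ArBert/bert-base-uncased-finetuned-ner-kmeans",
               aggregation_strategy="simple")
print(ner("My name is Sarah and I live in London."))
```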
|
jgammack/distilbert-base-mean-pooling
|
jgammack
| 2022-02-11T15:49:11Z | 143 | 5 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/distilbert-base-mean-pooling
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/distilbert-base-mean-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/distilbert-base-mean-pooling')
model = AutoModel.from_pretrained('jgammack/distilbert-base-mean-pooling')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/distilbert-base-mean-pooling)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sshasnain/wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
|
sshasnain
| 2022-02-11T13:25:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Wer: 0.4111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2982 | 17.86 | 500 | 2.4580 | 1.1089 |
| 0.9644 | 35.71 | 1000 | 0.1250 | 0.5156 |
| 0.1767 | 53.57 | 1500 | 0.0310 | 0.4267 |
| 0.0912 | 71.43 | 2000 | 0.0149 | 0.4178 |
| 0.0505 | 89.29 | 2500 | 0.0068 | 0.4111 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sshasnain/wav2vec2-xls-r-300m-bangla-command
|
sshasnain
| 2022-02-11T13:10:44Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:custom",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: Bengali
datasets:
- custom
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-300m-bangla-command
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: custom
type: custom
args: ben
metrics:
- name: Test WER
type: wer
value: 0.006
---
# wav2vec2-xls-r-300m-bangla-command
***
## Usage
Commands recognized by the model:
- '৫ টা কলম দেন'
- 'চেয়ারটা কোথায় রেখেছেন'
- 'ডানের বালতিটার প্রাইজ কেমন'
- 'দশ কেজি আলু কত'
- 'বাজুসের ল্যাপটপটা এসেছে'
- 'বাসার জন্য দরজা আছে'
- 'ম্যাম মোবাইলটা কি আছে'
- 'হ্যালো শ্যাম্পুর দাম বল'
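To transcribe one of these commands, a minimal sketch with the 🤗 `pipeline` API is shown below (not part of the original card; it assumes 16 kHz mono audio, and `command.wav` is a placeholder file name):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="sshasnain/wav2vec2-xls-r-300m-bangla-command")
print(asr("command.wav"))  # {'text': ...}
```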
|
edbeeching/test-trainer-to-hub
|
edbeeching
| 2022-02-11T10:36:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: test-trainer-to-hub
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.893760539629005
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer-to-hub
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7352
- Accuracy: 0.8456
- F1: 0.8938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4489 | 0.8235 | 0.8792 |
| 0.5651 | 2.0 | 918 | 0.4885 | 0.8260 | 0.8811 |
| 0.3525 | 3.0 | 1377 | 0.7352 | 0.8456 | 0.8938 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
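For reference, a minimal inference sketch (not part of the original card; the sentence pair is illustrative). MRPC is a sentence-pair paraphrase task, so the two sentences are encoded together:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("edbeeching/test-trainer-to-hub")
model = AutoModelForSequenceClassification.from_pretrained("edbeeching/test-trainer-to-hub")

inputs = tokenizer("The company reported strong earnings.",
                   "Earnings at the firm were strong.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # probabilities over the two MRPC classes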
|
espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
|
espnet
| 2022-02-11T06:24:00Z | 67 | 1 |
espnet
|
[
"espnet",
"audio",
"audio-to-audio",
"dataset:chime4",
"arxiv:1804.00015",
"arxiv:2011.03706",
"license:cc-by-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- audio-to-audio
language:
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw`
This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
```
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_beamformer_mvdr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_beamformer_mvdr_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 35841
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/noise_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
init: xavier_uniform
model_conf:
loss_type: mask_mse
mask_type: PSM^2
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 512
hop_length: 128
separator: wpe_beamformer
separator_conf:
num_spk: 1
loss_type: mask_mse
use_wpe: false
wnet_type: blstmp
wlayers: 3
wunits: 300
wprojs: 320
wdropout_rate: 0.0
taps: 5
delay: 3
use_dnn_mask_for_wpe: true
use_beamformer: true
bnet_type: blstmp
blayers: 3
bunits: 512
bprojs: 512
badim: 320
ref_channel: 3
use_noise_mask: true
beamformer_type: mvdr_souden
bdropout_rate: 0.0
decoder: stft
decoder_conf:
n_fft: 512
hop_length: 128
required:
- output_dir
version: 0.9.7
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)},
pages={785--792},
year={2021},
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
year={2020},
eprint={2011.03706},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
infinitejoy/wav2vec2-large-xls-r-300m-indonesian
|
infinitejoy
| 2022-02-11T05:56:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"id",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- id
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-indonesian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-indonesian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ID dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2759
- Wer: 0.3256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0387 | 4.72 | 1000 | 3.0892 | 1.0 |
| 1.7911 | 9.43 | 2000 | 0.8451 | 0.6702 |
| 1.2826 | 14.15 | 3000 | 0.4211 | 0.4166 |
| 1.1802 | 18.87 | 4000 | 0.3508 | 0.4690 |
| 1.1065 | 23.58 | 5000 | 0.3319 | 0.4662 |
| 1.0921 | 28.3 | 6000 | 0.3056 | 0.3880 |
| 1.0366 | 33.02 | 7000 | 0.2997 | 0.3665 |
| 0.9988 | 37.74 | 8000 | 0.2972 | 0.3653 |
| 0.9864 | 42.45 | 9000 | 0.2697 | 0.3371 |
| 0.9558 | 47.17 | 10000 | 0.2739 | 0.3141 |
| 0.9094 | 51.89 | 11000 | 0.2657 | 0.3533 |
| 0.9034 | 56.6 | 12000 | 0.2699 | 0.3397 |
| 0.8907 | 61.32 | 13000 | 0.2765 | 0.3470 |
| 0.8631 | 66.04 | 14000 | 0.2774 | 0.3346 |
| 0.8389 | 70.75 | 15000 | 0.2743 | 0.3365 |
| 0.8214 | 75.47 | 16000 | 0.2778 | 0.3201 |
| 0.8195 | 80.19 | 17000 | 0.2725 | 0.3286 |
| 0.7994 | 84.91 | 18000 | 0.2782 | 0.3315 |
| 0.7816 | 89.62 | 19000 | 0.2775 | 0.3363 |
| 0.7816 | 94.34 | 20000 | 0.2731 | 0.3278 |
| 0.7635 | 99.06 | 21000 | 0.2767 | 0.3259 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
lgris/wav2vec2-large-xlsr-coraa-portuguese-cv8
|
lgris
| 2022-02-10T23:23:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xlsr-coraa-portuguese-cv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-portuguese-cv8
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- Wer: 0.1365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5614 | 0.1 | 100 | 0.2542 | 0.1986 |
| 0.5181 | 0.19 | 200 | 0.2740 | 0.2146 |
| 0.5056 | 0.29 | 300 | 0.2472 | 0.2068 |
| 0.4747 | 0.39 | 400 | 0.2464 | 0.2166 |
| 0.4627 | 0.48 | 500 | 0.2277 | 0.2041 |
| 0.4403 | 0.58 | 600 | 0.2245 | 0.1977 |
| 0.4413 | 0.68 | 700 | 0.2156 | 0.1968 |
| 0.437 | 0.77 | 800 | 0.2102 | 0.1919 |
| 0.4305 | 0.87 | 900 | 0.2130 | 0.1864 |
| 0.4324 | 0.97 | 1000 | 0.2144 | 0.1902 |
| 0.4217 | 1.06 | 1100 | 0.2230 | 0.1891 |
| 0.3823 | 1.16 | 1200 | 0.2033 | 0.1774 |
| 0.3641 | 1.25 | 1300 | 0.2143 | 0.1830 |
| 0.3707 | 1.35 | 1400 | 0.2034 | 0.1793 |
| 0.3767 | 1.45 | 1500 | 0.2029 | 0.1823 |
| 0.3483 | 1.54 | 1600 | 0.1999 | 0.1740 |
| 0.3577 | 1.64 | 1700 | 0.1928 | 0.1728 |
| 0.3667 | 1.74 | 1800 | 0.1898 | 0.1726 |
| 0.3283 | 1.83 | 1900 | 0.1920 | 0.1688 |
| 0.3571 | 1.93 | 2000 | 0.1904 | 0.1649 |
| 0.3467 | 2.03 | 2100 | 0.1994 | 0.1648 |
| 0.3145 | 2.12 | 2200 | 0.1940 | 0.1682 |
| 0.3186 | 2.22 | 2300 | 0.1879 | 0.1571 |
| 0.3058 | 2.32 | 2400 | 0.1975 | 0.1678 |
| 0.3096 | 2.41 | 2500 | 0.1877 | 0.1589 |
| 0.2964 | 2.51 | 2600 | 0.1862 | 0.1568 |
| 0.3068 | 2.61 | 2700 | 0.1809 | 0.1588 |
| 0.3036 | 2.7 | 2800 | 0.1769 | 0.1573 |
| 0.3084 | 2.8 | 2900 | 0.1836 | 0.1524 |
| 0.3109 | 2.9 | 3000 | 0.1807 | 0.1519 |
| 0.2969 | 2.99 | 3100 | 0.1851 | 0.1516 |
| 0.2698 | 3.09 | 3200 | 0.1737 | 0.1490 |
| 0.2703 | 3.19 | 3300 | 0.1759 | 0.1457 |
| 0.2759 | 3.28 | 3400 | 0.1778 | 0.1471 |
| 0.2728 | 3.38 | 3500 | 0.1717 | 0.1462 |
| 0.2398 | 3.47 | 3600 | 0.1767 | 0.1451 |
| 0.256 | 3.57 | 3700 | 0.1742 | 0.1410 |
| 0.2712 | 3.67 | 3800 | 0.1674 | 0.1414 |
| 0.2648 | 3.76 | 3900 | 0.1717 | 0.1423 |
| 0.2576 | 3.86 | 4000 | 0.1672 | 0.1403 |
| 0.2504 | 3.96 | 4100 | 0.1683 | 0.1381 |
| 0.2406 | 4.05 | 4200 | 0.1685 | 0.1399 |
| 0.2403 | 4.15 | 4300 | 0.1656 | 0.1381 |
| 0.2233 | 4.25 | 4400 | 0.1687 | 0.1371 |
| 0.2546 | 4.34 | 4500 | 0.1642 | 0.1377 |
| 0.2431 | 4.44 | 4600 | 0.1655 | 0.1372 |
| 0.2337 | 4.54 | 4700 | 0.1625 | 0.1370 |
| 0.2607 | 4.63 | 4800 | 0.1618 | 0.1363 |
| 0.2292 | 4.73 | 4900 | 0.1622 | 0.1366 |
| 0.2232 | 4.83 | 5000 | 0.1626 | 0.1365 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
lgris/wav2vec2-large-xlsr-coraa-portuguese-cv7
|
lgris
| 2022-02-10T23:22:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"pt",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xlsr-coraa-portuguese-cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-portuguese-cv7
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1777
- Wer: 0.1339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4779 | 0.13 | 100 | 0.2620 | 0.2020 |
| 0.4505 | 0.26 | 200 | 0.2339 | 0.1998 |
| 0.4285 | 0.39 | 300 | 0.2507 | 0.2109 |
| 0.4148 | 0.52 | 400 | 0.2311 | 0.2101 |
| 0.4072 | 0.65 | 500 | 0.2278 | 0.1899 |
| 0.388 | 0.78 | 600 | 0.2193 | 0.1898 |
| 0.3952 | 0.91 | 700 | 0.2108 | 0.1901 |
| 0.3851 | 1.04 | 800 | 0.2121 | 0.1788 |
| 0.3496 | 1.17 | 900 | 0.2154 | 0.1776 |
| 0.3063 | 1.3 | 1000 | 0.2095 | 0.1730 |
| 0.3376 | 1.43 | 1100 | 0.2129 | 0.1801 |
| 0.3273 | 1.56 | 1200 | 0.2132 | 0.1776 |
| 0.3347 | 1.69 | 1300 | 0.2054 | 0.1698 |
| 0.323 | 1.82 | 1400 | 0.1986 | 0.1724 |
| 0.3079 | 1.95 | 1500 | 0.2005 | 0.1701 |
| 0.3029 | 2.08 | 1600 | 0.2159 | 0.1644 |
| 0.2694 | 2.21 | 1700 | 0.1992 | 0.1678 |
| 0.2733 | 2.34 | 1800 | 0.2032 | 0.1657 |
| 0.269 | 2.47 | 1900 | 0.2056 | 0.1592 |
| 0.2869 | 2.6 | 2000 | 0.2058 | 0.1616 |
| 0.2813 | 2.73 | 2100 | 0.1868 | 0.1584 |
| 0.2616 | 2.86 | 2200 | 0.1841 | 0.1550 |
| 0.2809 | 2.99 | 2300 | 0.1902 | 0.1577 |
| 0.2598 | 3.12 | 2400 | 0.1910 | 0.1514 |
| 0.24 | 3.25 | 2500 | 0.1971 | 0.1555 |
| 0.2481 | 3.38 | 2600 | 0.1853 | 0.1537 |
| 0.2437 | 3.51 | 2700 | 0.1897 | 0.1496 |
| 0.2384 | 3.64 | 2800 | 0.1842 | 0.1495 |
| 0.2405 | 3.77 | 2900 | 0.1884 | 0.1500 |
| 0.2372 | 3.9 | 3000 | 0.1950 | 0.1548 |
| 0.229 | 4.03 | 3100 | 0.1928 | 0.1477 |
| 0.2047 | 4.16 | 3200 | 0.1891 | 0.1472 |
| 0.2102 | 4.29 | 3300 | 0.1930 | 0.1473 |
| 0.199 | 4.42 | 3400 | 0.1914 | 0.1456 |
| 0.2121 | 4.55 | 3500 | 0.1840 | 0.1437 |
| 0.211 | 4.67 | 3600 | 0.1843 | 0.1403 |
| 0.2072 | 4.8 | 3700 | 0.1836 | 0.1428 |
| 0.2224 | 4.93 | 3800 | 0.1747 | 0.1412 |
| 0.1974 | 5.06 | 3900 | 0.1813 | 0.1416 |
| 0.1895 | 5.19 | 4000 | 0.1869 | 0.1406 |
| 0.1763 | 5.32 | 4100 | 0.1830 | 0.1394 |
| 0.2001 | 5.45 | 4200 | 0.1775 | 0.1394 |
| 0.1909 | 5.58 | 4300 | 0.1806 | 0.1373 |
| 0.1812 | 5.71 | 4400 | 0.1784 | 0.1359 |
| 0.1737 | 5.84 | 4500 | 0.1778 | 0.1353 |
| 0.1915 | 5.97 | 4600 | 0.1777 | 0.1349 |
| 0.1921 | 6.1 | 4700 | 0.1784 | 0.1359 |
| 0.1805 | 6.23 | 4800 | 0.1757 | 0.1348 |
| 0.1742 | 6.36 | 4900 | 0.1771 | 0.1341 |
| 0.1709 | 6.49 | 5000 | 0.1777 | 0.1339 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
lgris/wavlm-large-CORAA-pt-cv7
|
lgris
| 2022-02-10T23:16:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- pt
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wavlm-large-CORAA-pt-cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-large-CORAA-pt-cv7
This model is a fine-tuned version of [lgris/WavLM-large-CORAA-pt](https://huggingface.co/lgris/WavLM-large-CORAA-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2546
- Wer: 0.2261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6029 | 0.13 | 100 | 0.3679 | 0.3347 |
| 0.5297 | 0.26 | 200 | 0.3516 | 0.3227 |
| 0.5134 | 0.39 | 300 | 0.3327 | 0.3167 |
| 0.4941 | 0.52 | 400 | 0.3281 | 0.3122 |
| 0.4816 | 0.65 | 500 | 0.3154 | 0.3102 |
| 0.4649 | 0.78 | 600 | 0.3199 | 0.3058 |
| 0.461 | 0.91 | 700 | 0.3047 | 0.2974 |
| 0.4613 | 1.04 | 800 | 0.3006 | 0.2900 |
| 0.4198 | 1.17 | 900 | 0.2951 | 0.2891 |
| 0.3864 | 1.3 | 1000 | 0.2989 | 0.2862 |
| 0.3963 | 1.43 | 1100 | 0.2932 | 0.2830 |
| 0.3953 | 1.56 | 1200 | 0.2936 | 0.2829 |
| 0.3962 | 1.69 | 1300 | 0.2952 | 0.2773 |
| 0.3811 | 1.82 | 1400 | 0.2915 | 0.2748 |
| 0.3736 | 1.95 | 1500 | 0.2839 | 0.2684 |
| 0.3507 | 2.08 | 1600 | 0.2914 | 0.2678 |
| 0.3277 | 2.21 | 1700 | 0.2895 | 0.2652 |
| 0.3344 | 2.34 | 1800 | 0.2843 | 0.2673 |
| 0.335 | 2.47 | 1900 | 0.2821 | 0.2635 |
| 0.3559 | 2.6 | 2000 | 0.2830 | 0.2599 |
| 0.3254 | 2.73 | 2100 | 0.2711 | 0.2577 |
| 0.3263 | 2.86 | 2200 | 0.2685 | 0.2546 |
| 0.3266 | 2.99 | 2300 | 0.2679 | 0.2521 |
| 0.3066 | 3.12 | 2400 | 0.2727 | 0.2526 |
| 0.2998 | 3.25 | 2500 | 0.2648 | 0.2537 |
| 0.2961 | 3.38 | 2600 | 0.2630 | 0.2519 |
| 0.3046 | 3.51 | 2700 | 0.2684 | 0.2506 |
| 0.3006 | 3.64 | 2800 | 0.2604 | 0.2492 |
| 0.2992 | 3.77 | 2900 | 0.2682 | 0.2508 |
| 0.2775 | 3.9 | 3000 | 0.2732 | 0.2440 |
| 0.2903 | 4.03 | 3100 | 0.2659 | 0.2427 |
| 0.2535 | 4.16 | 3200 | 0.2650 | 0.2433 |
| 0.2714 | 4.29 | 3300 | 0.2588 | 0.2394 |
| 0.2636 | 4.42 | 3400 | 0.2652 | 0.2434 |
| 0.2647 | 4.55 | 3500 | 0.2624 | 0.2371 |
| 0.2796 | 4.67 | 3600 | 0.2611 | 0.2373 |
| 0.2644 | 4.8 | 3700 | 0.2604 | 0.2341 |
| 0.2657 | 4.93 | 3800 | 0.2567 | 0.2331 |
| 0.2423 | 5.06 | 3900 | 0.2594 | 0.2322 |
| 0.2556 | 5.19 | 4000 | 0.2587 | 0.2323 |
| 0.2327 | 5.32 | 4100 | 0.2639 | 0.2299 |
| 0.2613 | 5.45 | 4200 | 0.2569 | 0.2310 |
| 0.2382 | 5.58 | 4300 | 0.2585 | 0.2298 |
| 0.2404 | 5.71 | 4400 | 0.2543 | 0.2287 |
| 0.2368 | 5.84 | 4500 | 0.2553 | 0.2286 |
| 0.2514 | 5.97 | 4600 | 0.2517 | 0.2279 |
| 0.2415 | 6.1 | 4700 | 0.2524 | 0.2270 |
| 0.2338 | 6.23 | 4800 | 0.2540 | 0.2265 |
| 0.219 | 6.36 | 4900 | 0.2549 | 0.2263 |
| 0.2428 | 6.49 | 5000 | 0.2546 | 0.2261 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
|
emre
| 2022-02-10T22:57:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4813
- Wer: 0.7207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2 | 0.53 | 400 | 3.1949 | 0.9964 |
| 2.9387 | 1.07 | 800 | 2.5015 | 1.0337 |
| 1.5975 | 1.6 | 1200 | 1.0928 | 0.9945 |
| 1.0688 | 2.13 | 1600 | 0.8388 | 0.9390 |
| 0.8977 | 2.66 | 2000 | 0.7106 | 0.8889 |
| 0.789 | 3.2 | 2400 | 0.6051 | 0.8273 |
| 0.7116 | 3.73 | 2800 | 0.5580 | 0.7855 |
| 0.6576 | 4.26 | 3200 | 0.5033 | 0.7433 |
| 0.6002 | 4.79 | 3600 | 0.4813 | 0.7207 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-small
|
emre
| 2022-02-10T22:55:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4375
- Wer: 0.5050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8735 | 4.21 | 400 | 2.8173 | 1.0002 |
| 1.0073 | 8.42 | 800 | 0.4981 | 0.6717 |
| 0.3395 | 12.63 | 1200 | 0.4470 | 0.5866 |
| 0.2254 | 16.84 | 1600 | 0.4349 | 0.5491 |
| 0.1648 | 21.05 | 2000 | 0.4454 | 0.5284 |
| 0.1325 | 25.26 | 2400 | 0.4552 | 0.5131 |
| 0.1102 | 29.47 | 2800 | 0.4375 | 0.5050 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
emre/wav2vec2-large-xlsr-53-W2V2-TR-MED
|
emre
| 2022-02-10T22:55:21Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-W2V2-TR-MED
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-W2V2-TR-MED
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4467
- Wer: 0.4598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1343 | 4.21 | 400 | 2.3674 | 1.0372 |
| 0.8075 | 8.42 | 800 | 0.4583 | 0.6308 |
| 0.3209 | 12.63 | 1200 | 0.4291 | 0.5531 |
| 0.2273 | 16.84 | 1600 | 0.4348 | 0.5378 |
| 0.1764 | 21.05 | 2000 | 0.4550 | 0.5326 |
| 0.148 | 25.26 | 2400 | 0.4839 | 0.5319 |
| 0.1268 | 29.47 | 2800 | 0.4515 | 0.5070 |
| 0.1113 | 33.68 | 3200 | 0.4590 | 0.4930 |
| 0.1025 | 37.89 | 3600 | 0.4546 | 0.4888 |
| 0.0922 | 42.11 | 4000 | 0.4782 | 0.4852 |
| 0.082 | 46.32 | 4400 | 0.4605 | 0.4752 |
| 0.0751 | 50.53 | 4800 | 0.4358 | 0.4689 |
| 0.0699 | 54.74 | 5200 | 0.4359 | 0.4629 |
| 0.0633 | 58.95 | 5600 | 0.4467 | 0.4598 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
squish/BertHarmon
|
squish
| 2022-02-10T21:28:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
thumbnail: "https://en.memesrandom.com/wp-content/uploads/2020/11/juega-ajedrez.jpeg"
widget:
- text: "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]"
  example_title: Empty Board
- text: "6Q1/5k2/3P4/1R3p2/P4P2/7Q/6RK/8 b - - 2 60 Black <MOVE_SEP> [MASK]"
  example_title: Late Game Board
---
# BertHarmon
Research done at Johns Hopkins University by Michael DeLeo
Contact: [email protected]

## Introduction
BertHarmon is a BERT model trained for the task of Chess.

## Sample Usage
```python
from transformers import pipeline
task = pipeline('fill-mask', model='squish/BertHarmon')
task("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]")
```
The input string consists of the FEN position, followed by the player color, the `<MOVE_SEP>` token, and finally the `[MASK]` token. The mask is filled with the algebraic notation for the chess move to be played given the current board state.
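For example, the prompt for the starting position can be built programmatically. The sketch below uses the third-party `python-chess` package to produce the FEN string; it is an illustration, not part of the original research code:
```python
import chess
from transformers import pipeline

task = pipeline('fill-mask', model='squish/BertHarmon')

board = chess.Board()  # starting position
color = "White" if board.turn == chess.WHITE else "Black"
prompt = f"{board.fen()} {color} <MOVE_SEP> [MASK]"

for prediction in task(prompt):
    print(prediction["token_str"], prediction["score"])
```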
## Links
[Github](https://github.com/deleomike/NLP-Chess)
[HuggingFace](https://huggingface.co/squish/BertHarmon)
|
FuriouslyAsleep/markuplm-large-finetuned-qa
|
FuriouslyAsleep
| 2022-02-10T20:30:55Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"markuplm",
"question-answering",
"arxiv:2110.08518",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
# MarkupLM Large fine-tuned on WebSRC to allow Question Answering.
This model is adapted from Microsoft's MarkupLM. It is the result of partially following the instructions in the MarkupLM git repo, with adjustments described below under the fine-tuning args section. This version is not endorsed by Microsoft.
Test the question answering out in the [Markup QA space here](https://huggingface.co/spaces/FuriouslyAsleep/markupQAdemo)
\---------------------------------------------------------------------------------
**Fine-tuned Multimodal (text +markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction (From Microsoft MarkupLM Large Model Card)
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
\---------------------------------------------------------------------------------
Fine-tuning args:
--per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4
## Training was performed on only a small subset of the WebSRC dataset:
The number of total websites is 60
The train websites list is ['ga09']
The test websites list is []
The dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']
The number of processed websites is 60
\---------------------------------------------------------------------------------
The hosted inference test here may not work. Use the transformers markuplm branch from [NielsRogge transformers markuplm branch](https://github.com/NielsRogge/transformers/tree/modeling_markuplm).
After installing from there, try the following model and tokenizer assignments (consider using a file for the tags dict):
```python
# the MarkupLM classes below come from the markuplm branch referenced above
from transformers import MarkupLMForQuestionAnswering, MarkupLMTokenizer

model = MarkupLMForQuestionAnswering.from_pretrained("FuriouslyAsleep/markuplm-large-finetuned-qa")

tokenizer = MarkupLMTokenizer(
vocab_file="vocab.json",
merges_file="merges.txt",
tags_dict= {"a": 0, "abbr": 1, "acronym": 2, "address": 3, "altGlyph": 4, "altGlyphDef": 5, "altGlyphItem": 6, "animate": 7, "animateColor": 8, "animateMotion": 9, "animateTransform": 10, "applet": 11, "area": 12, "article": 13, "aside": 14, "audio": 15, "b": 16, "base": 17, "basefont": 18, "bdi": 19, "bdo": 20, "bgsound": 21, "big": 22, "blink": 23, "blockquote": 24, "body": 25, "br": 26, "button": 27, "canvas": 28, "caption": 29, "center": 30, "circle": 31, "cite": 32, "clipPath": 33, "code": 34, "col": 35, "colgroup": 36, "color-profile": 37, "content": 38, "cursor": 39, "data": 40, "datalist": 41, "dd": 42, "defs": 43, "del": 44, "desc": 45, "details": 46, "dfn": 47, "dialog": 48, "dir": 49, "div": 50, "dl": 51, "dt": 52, "ellipse": 53, "em": 54, "embed": 55, "feBlend": 56, "feColorMatrix": 57, "feComponentTransfer": 58, "feComposite": 59, "feConvolveMatrix": 60, "feDiffuseLighting": 61, "feDisplacementMap": 62, "feDistantLight": 63, "feFlood": 64, "feFuncA": 65, "feFuncB": 66, "feFuncG": 67, "feFuncR": 68, "feGaussianBlur": 69, "feImage": 70, "feMerge": 71, "feMergeNode": 72, "feMorphology": 73, "feOffset": 74, "fePointLight": 75, "feSpecularLighting": 76, "feSpotLight": 77, "feTile": 78, "feTurbulence": 79, "fieldset": 80, "figcaption": 81, "figure": 82, "filter": 83, "font-face-format": 84, "font-face-name": 85, "font-face-src": 86, "font-face-uri": 87, "font-face": 88, "font": 89, "footer": 90, "foreignObject": 91, "form": 92, "frame": 93, "frameset": 94, "g": 95, "glyph": 96, "glyphRef": 97, "h1": 98, "h2": 99, "h3": 100, "h4": 101, "h5": 102, "h6": 103, "head": 104, "header": 105, "hgroup": 106, "hkern": 107, "hr": 108, "html": 109, "i": 110, "iframe": 111, "image": 112, "img": 113, "input": 114, "ins": 115, "kbd": 116, "keygen": 117, "label": 118, "legend": 119, "li": 120, "line": 121, "linearGradient": 122, "link": 123, "main": 124, "map": 125, "mark": 126, "marker": 127, "marquee": 128, "mask": 129, "math": 130, "menu": 131, "menuitem": 132, "meta": 133, "metadata": 134, "meter": 135, "missing-glyph": 136, "mpath": 137, "nav": 138, "nobr": 139, "noembed": 140, "noframes": 141, "noscript": 142, "object": 143, "ol": 144, "optgroup": 145, "option": 146, "output": 147, "p": 148, "param": 149, "path": 150, "pattern": 151, "picture": 152, "plaintext": 153, "polygon": 154, "polyline": 155, "portal": 156, "pre": 157, "progress": 158, "q": 159, "radialGradient": 160, "rb": 161, "rect": 162, "rp": 163, "rt": 164, "rtc": 165, "ruby": 166, "s": 167, "samp": 168, "script": 169, "section": 170, "select": 171, "set": 172, "shadow": 173, "slot": 174, "small": 175, "source": 176, "spacer": 177, "span": 178, "stop": 179, "strike": 180, "strong": 181, "style": 182, "sub": 183, "summary": 184, "sup": 185, "svg": 186, "switch": 187, "symbol": 188, "table": 189, "tbody": 190, "td": 191, "template": 192, "text": 193, "textPath": 194, "textarea": 195, "tfoot": 196, "th": 197, "thead": 198, "time": 199, "title": 200, "tr": 201, "track": 202, "tref": 203, "tspan": 204, "tt": 205, "u": 206, "ul": 207, "use": 208, "var": 209, "video": 210, "view": 211, "vkern": 212, "wbr": 213, "xmp": 214},
add_prefix_space=True,
)
```
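As the note above suggests, the tags dict can be kept in a separate file instead of being pasted inline (a small sketch; `tags_dict.json` is a hypothetical filename):

```python
import json

# load the tags dict from a JSON file instead of defining it inline
with open("tags_dict.json") as f:
    tags_dict = json.load(f)
```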
Go to [https://github.com/uwts/ProjectRisk](https://github.com/uwts/ProjectRisk) for a sample script.
|
Chiuchiyin/DialoGPT-small-Donald
|
Chiuchiyin
| 2022-02-10T20:16:00Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- conversational
---
Donald Trump DialoGPT model built by following the tutorial by [Ruolin Zheng](https://youtu.be/Rk8eM1p_xgM).
The training data consisted of the 2020 presidential debates.
More work is needed to optimize it; I don't have access to a GPU with more VRAM.
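A hedged usage sketch (the chat loop below follows the standard DialoGPT generation pattern and is not part of the original card; the prompt is only an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Chiuchiyin/DialoGPT-small-Donald")
model = AutoModelForCausalLM.from_pretrained("Chiuchiyin/DialoGPT-small-Donald")

# encode a prompt, append the end-of-sequence token, and generate a reply
input_ids = tokenizer.encode("What is your plan for the economy?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```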
|
skhurana/test_model
|
skhurana
| 2022-02-10T16:28:36Z | 0 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Hugging-face testing
---
language:
- "List of ISO 639-1 code for your language"
- lang1
- lang2
thumbnail: "url to a thumbnail used in social sharing"
tags:
- PyTorch
license: apache-2.0
datasets:
- dataset1
- dataset2
metrics:
- metric1
---
|
satyaalmasian/temporal_tagger_German_GELECTRA
|
satyaalmasian
| 2022-02-10T15:23:51Z | 61 | 1 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# BERT-based temporal tagger
Token classifier for temporal tagging of plain text, using the German GELECTRA model.
# Model description
GELECTRA is a transformer (ELECTRA) model pretrained on a large corpus of German data in a self-supervised fashion. We use GELECTRA for token classification to tag the tokens in text with the following classes (the tags follow the English TIMEX3 format):
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
# Intended uses & limitations
This model is best used together with code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output can be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output. The repository's examples use the English models, but the German model can be used in the same way.
# How to use
You can load the model as follows:
```
from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA")
```
For inference, use:
```
processed_text = tokenizer(input_text, return_tensors="pt")
result = model(**processed_text)
classification = result[0]  # per-token logits
```
For an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger); we provide a function `merge_tokens` to decipher the output.
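As a rough illustration only (this is not the repository's `merge_tokens` helper; label names come from `model.config.id2label` and may be generic placeholders if the config does not name them), the per-token logits can be mapped to tags like this:

```python
import torch

# a minimal sketch: map per-token logits to label names
predicted_ids = torch.argmax(classification, dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(processed_text["input_ids"][0])
labels = [model.config.id2label[int(i)] for i in predicted_ids]
print(list(zip(tokens, labels)))
```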
To further fine-tune, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
For pre-training, we use a large corpus of news articles automatically annotated with HeidelTime.
We use two data sources for fine-tuning:
[Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), automatically translated to German, and the
[KRAUTS dataset](https://github.com/JannikStroetgen/KRAUTS).
# Training procedure
The model is trained from publicly available checkpoints on Hugging Face (`deepset/gelectra-large`) with a batch size of 192. For pre-training we use a learning rate of 1e-07 with an Adam optimizer and linear weight decay.
For fine-tuning we use a batch size of 16 and a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 3 different random seeds; this version of the model uses seed=7.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new
|
ajaiswal1008
| 2022-02-10T15:11:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi-colab_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-colab_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
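The list above might map onto `transformers.TrainingArguments` roughly as follows (a sketch only; the actual training script is not part of this card, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hi-colab_new",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    warmup_steps=500,
    num_train_epochs=30,
    seed=42,
    fp16=True,                       # Native AMP mixed precision
)
```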
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
am-shb/bert-base-multilingual-uncased-pretrained
|
am-shb
| 2022-02-10T14:49:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-32-1
|
SetFit
| 2022-02-10T11:56:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-32-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-32-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4201
- Accuracy: 0.8759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7162 | 1.0 | 13 | 0.6832 | 0.5385 |
| 0.6561 | 2.0 | 26 | 0.7270 | 0.4615 |
| 0.4685 | 3.0 | 39 | 1.0674 | 0.5385 |
| 0.2837 | 4.0 | 52 | 1.0841 | 0.5385 |
| 0.1129 | 5.0 | 65 | 0.3502 | 0.9231 |
| 0.0118 | 6.0 | 78 | 0.4829 | 0.9231 |
| 0.0022 | 7.0 | 91 | 0.7430 | 0.8462 |
| 0.0007 | 8.0 | 104 | 0.8219 | 0.8462 |
| 0.0005 | 9.0 | 117 | 0.8787 | 0.8462 |
| 0.0003 | 10.0 | 130 | 0.8713 | 0.8462 |
| 0.0003 | 11.0 | 143 | 0.8473 | 0.8462 |
| 0.0002 | 12.0 | 156 | 0.8482 | 0.8462 |
| 0.0002 | 13.0 | 169 | 0.8494 | 0.8462 |
| 0.0002 | 14.0 | 182 | 0.8638 | 0.8462 |
| 0.0002 | 15.0 | 195 | 0.8492 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-6
|
SetFit
| 2022-02-10T09:46:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4331
- Accuracy: 0.7106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6486 | 1.0 | 3 | 0.7901 | 0.25 |
| 0.6418 | 2.0 | 6 | 0.9259 | 0.25 |
| 0.6169 | 3.0 | 9 | 1.0574 | 0.25 |
| 0.5639 | 4.0 | 12 | 1.1372 | 0.25 |
| 0.4562 | 5.0 | 15 | 0.6090 | 0.5 |
| 0.3105 | 6.0 | 18 | 0.4435 | 1.0 |
| 0.2303 | 7.0 | 21 | 0.2804 | 1.0 |
| 0.1388 | 8.0 | 24 | 0.2205 | 1.0 |
| 0.0918 | 9.0 | 27 | 0.1282 | 1.0 |
| 0.0447 | 10.0 | 30 | 0.0643 | 1.0 |
| 0.0297 | 11.0 | 33 | 0.0361 | 1.0 |
| 0.0159 | 12.0 | 36 | 0.0211 | 1.0 |
| 0.0102 | 13.0 | 39 | 0.0155 | 1.0 |
| 0.0061 | 14.0 | 42 | 0.0158 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0189 | 1.0 |
| 0.0035 | 16.0 | 48 | 0.0254 | 1.0 |
| 0.0027 | 17.0 | 51 | 0.0305 | 1.0 |
| 0.0021 | 18.0 | 54 | 0.0287 | 1.0 |
| 0.0016 | 19.0 | 57 | 0.0215 | 1.0 |
| 0.0016 | 20.0 | 60 | 0.0163 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0138 | 1.0 |
| 0.0015 | 22.0 | 66 | 0.0131 | 1.0 |
| 0.001 | 23.0 | 69 | 0.0132 | 1.0 |
| 0.0014 | 24.0 | 72 | 0.0126 | 1.0 |
| 0.0011 | 25.0 | 75 | 0.0125 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0119 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0110 | 1.0 |
| 0.0007 | 28.0 | 84 | 0.0106 | 1.0 |
| 0.0008 | 29.0 | 87 | 0.0095 | 1.0 |
| 0.0009 | 30.0 | 90 | 0.0089 | 1.0 |
| 0.0008 | 31.0 | 93 | 0.0083 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0075 | 1.0 |
| 0.0008 | 33.0 | 99 | 0.0066 | 1.0 |
| 0.0006 | 34.0 | 102 | 0.0059 | 1.0 |
| 0.0007 | 35.0 | 105 | 0.0054 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0049 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0047 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0045 | 1.0 |
| 0.0006 | 40.0 | 120 | 0.0046 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0045 | 1.0 |
| 0.0006 | 42.0 | 126 | 0.0044 | 1.0 |
| 0.0006 | 43.0 | 129 | 0.0043 | 1.0 |
| 0.0006 | 44.0 | 132 | 0.0044 | 1.0 |
| 0.0005 | 45.0 | 135 | 0.0045 | 1.0 |
| 0.0006 | 46.0 | 138 | 0.0043 | 1.0 |
| 0.0006 | 47.0 | 141 | 0.0043 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0041 | 1.0 |
| 0.0007 | 49.0 | 147 | 0.0042 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0042 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|