modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
kevinzyz/chinese_roberta_L-12_H-768-finetuned-MC-hyper | d9fbeaab01bb46ec60542e42fc1149e8e39bdbc9 | 2021-12-09T04:48:15.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | kevinzyz | null | kevinzyz/chinese_roberta_L-12_H-768-finetuned-MC-hyper | 0 | null | transformers | 35,500 | Entry not found |
kevinzyz/chinese_roberta_L-2_H-128-finetuned-MC-hyper | 7c5490211a3b69e6297937f2ace5fbf4b9b87b17 | 2021-12-09T13:11:53.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | kevinzyz | null | kevinzyz/chinese_roberta_L-2_H-128-finetuned-MC-hyper | 0 | null | transformers | 35,501 | Entry not found |
khizon/greek-speech-emotion-classifier-demo | f214ac6debe63f935a812ef62603481410772912 | 2022-01-09T12:49:04.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | khizon | null | khizon/greek-speech-emotion-classifier-demo | 0 | null | transformers | 35,502 | Entry not found |
kika2000/wav2vec2-large-xls-r-300m-kika3_my-colab | dec99539624deeaebb9f2315b40ed8bd38221549 | 2022-01-25T23:11:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-kika3_my-colab | 0 | null | transformers | 35,503 | Entry not found |
kika2000/wav2vec2-large-xls-r-300m-kika_my-colab | 03143862fc5e7eb24e2198dbf32be7a57f47c5d6 | 2022-01-25T04:10:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-kika_my-colab | 0 | null | transformers | 35,504 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-kika_my-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kika_my-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3300
- Wer: 0.5804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8067 | 4.82 | 400 | 1.2892 | 0.8886 |
| 0.3048 | 9.64 | 800 | 1.2285 | 0.6797 |
| 0.1413 | 14.46 | 1200 | 1.1970 | 0.6509 |
| 0.1047 | 19.28 | 1600 | 1.3628 | 0.6166 |
| 0.0799 | 24.1 | 2000 | 1.3345 | 0.6014 |
| 0.0638 | 28.92 | 2400 | 1.3300 | 0.5804 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
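Since the usage sections above are still empty, the following is only a minimal inference sketch (not part of the original card). It assumes the checkpoint follows the standard `Wav2Vec2ForCTC`/`Wav2Vec2Processor` layout; `audio.wav` is a placeholder file name and the resampling step assumes the model expects 16 kHz input.
```python
# Minimal sketch: greedy CTC decoding with this checkpoint (assumptions noted above).
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "kika2000/wav2vec2-large-xls-r-300m-kika_my-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("audio.wav")                    # placeholder input file
speech = torchaudio.functional.resample(speech, sr, 16_000)  # model is assumed to expect 16 kHz

inputs = processor(speech.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```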
|
kika2000/wav2vec2-large-xls-r-300m-test80_my-colab | b7a39f206ad59f0c37a86450b8e30c6707987647 | 2022-01-31T12:19:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-test80_my-colab | 0 | null | transformers | 35,505 | Entry not found |
kika2000/wav2vec2-large-xls-r-300m-test81_my-colab | 4fd6a06af3ae843deccacc9b8308903eb2214381 | 2022-02-04T11:14:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-test81_my-colab | 0 | null | transformers | 35,506 | Entry not found |
kikumaru818/easy_algebra | c2e5c667c8e9d527213c1fa443a9bfdc2345c446 | 2021-11-29T00:37:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kikumaru818 | null | kikumaru818/easy_algebra | 0 | null | transformers | 35,507 | Entry not found |
kiyoung2/dpr_p-encoder_roberta-small | fe440fea034a4085323c18ee5eef5558763096d5 | 2021-10-29T02:38:33.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | kiyoung2 | null | kiyoung2/dpr_p-encoder_roberta-small | 0 | null | transformers | 35,508 | Entry not found |
kiyoung2/dpr_q-encoder_roberta-small | 66437541310a6e3793a9328bcbab521f8439d507 | 2021-10-29T02:38:21.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | kiyoung2 | null | kiyoung2/dpr_q-encoder_roberta-small | 0 | null | transformers | 35,509 | Entry not found |
kizunasunhy/fnet-base-finetuned-ner | 8adc76990574cc21f9bbc12e8d4254f827de6ad2 | 2021-10-15T09:33:49.000Z | [
"pytorch",
"fnet",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kizunasunhy | null | kizunasunhy/fnet-base-finetuned-ner | 0 | null | transformers | 35,510 | Entry not found |
kmfoda/output_dir | cb4ec67680c0b3f78f114901695306a31a7e563f | 2022-02-01T11:06:56.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kmfoda | null | kmfoda/output_dir | 0 | null | transformers | 35,511 | Entry not found |
koala/bert-large-uncased-ko | 0cb0428a1f08779ce778fe535d4ce91bea912270 | 2021-12-10T08:29:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-uncased-ko | 0 | null | transformers | 35,512 | Entry not found |
koala/xlm-roberta-large-bn | f7e8ea3f443589a5f5af64c9736ecab943f466ec | 2022-01-05T13:05:16.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/xlm-roberta-large-bn | 0 | null | transformers | 35,513 | Entry not found |
koala/xlm-roberta-large-de | 08dcb3ce3813f966a5ef6a290d03c1e828b1f078 | 2021-12-06T18:16:37.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/xlm-roberta-large-de | 0 | null | transformers | 35,514 | Entry not found |
koala/xlm-roberta-large-hi | 018b51c821cddccd515436221a7432d3db0f8882 | 2021-12-21T12:58:05.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/xlm-roberta-large-hi | 0 | null | transformers | 35,515 | Entry not found |
koala/xlm-roberta-large-zh | cbb7d05a8c7243a658ce51dfe4bd28da19feb775 | 2021-12-06T18:26:29.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/xlm-roberta-large-zh | 0 | null | transformers | 35,516 | Entry not found |
korca/bert-base-mm-cased | 6c72fa51242a8a17a4a9e94024f7699f2858e3a6 | 2021-09-15T07:41:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/bert-base-mm-cased | 0 | null | transformers | 35,517 | Entry not found |
korca/meaning-match-bert-base | 8410e651f39d3ea55a401d75474e83cd401de3ba | 2021-11-23T08:35:37.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/meaning-match-bert-base | 0 | null | transformers | 35,518 | Entry not found |
korca/meaning-match-bert-large | 8a3eaeb1ba7708e0dcd7a954f35b9445b767d0ea | 2021-11-18T17:52:44.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/meaning-match-bert-large | 0 | null | transformers | 35,519 | Entry not found |
korca/meaning-match-electra-large | 81b94142c53bb4a3b4d0f8e24592f346aa1e3f91 | 2021-11-29T08:40:30.000Z | [
"pytorch",
"electra",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/meaning-match-electra-large | 0 | null | transformers | 35,520 | Entry not found |
korca/roberta-base-mm | c04ee9a2161fd67746f2e269250c1ac9b35cf4ca | 2021-09-14T07:46:00.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/roberta-base-mm | 0 | null | transformers | 35,521 | Entry not found |
kp17/DialoGPT-small-tonystark | 9107fc295d77cb38eeca18513c88d3be8ad0e1af | 2021-08-27T06:44:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"Conversational"
] | text-generation | false | kp17 | null | kp17/DialoGPT-small-tonystark | 0 | null | transformers | 35,522 | ---
tags:
- Conversational
---
# Tony Stark DialoGPT Model |
kr0n0s/AssameseBert | fdebb1792a5f876cf41f4955fb0327ede381bb9b | 2021-07-29T20:24:48.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | kr0n0s | null | kr0n0s/AssameseBert | 0 | null | transformers | 35,523 | Entry not found |
krevas/finance-electra-small-discriminator | 139aedb657a7a7d8fc9024d4bc93346fbb8302cc | 2020-07-09T05:46:38.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | krevas | null | krevas/finance-electra-small-discriminator | 0 | null | transformers | 35,524 | Entry not found |
kris/DialoGPT-small-spock4 | eb11e5daa35ffee8b279a529c0b0b9cefe46c0c3 | 2021-09-23T14:16:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kris | null | kris/DialoGPT-small-spock4 | 0 | null | transformers | 35,525 | ---
tags:
- conversational
---
# Spock model |
kris/DialoGPT-small-spock5 | bf3db76f28871513f4b28ceeb22000d81eeb8802 | 2021-09-23T15:12:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kris | null | kris/DialoGPT-small-spock5 | 0 | null | transformers | 35,526 | ---
tags:
- conversational
---
# Spock model |
krupine/telectra-discriminator | 6900ceca63aeb79f9cecd817ce4c2568d6ab45ef | 2021-01-22T08:41:00.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | krupine | null | krupine/telectra-discriminator | 0 | null | transformers | 35,527 | Entry not found |
kshitiz/testing-bot-repo | 7ffcf35a9318f6c9767547481afcf3bb1a545509 | 2021-11-09T06:58:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kshitiz | null | kshitiz/testing-bot-repo | 0 | null | transformers | 35,528 | ---
tags:
- conversational
---
# Testing bot Model |
ksinar/DialoGPT-small-morty | b784f81dda6602e541b82686245eff8e810dc9da | 2021-08-28T14:40:58.000Z | [
"pytorch"
] | null | false | ksinar | null | ksinar/DialoGPT-small-morty | 0 | null | null | 35,529 | Entry not found |
kumakino/fairy-tale-gpt2-small | 5b672e6d77bbf42ce54be3790acd7b620428c8b3 | 2021-12-13T23:07:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kumakino | null | kumakino/fairy-tale-gpt2-small | 0 | null | transformers | 35,530 | Entry not found |
kunalbhargava/DialoGPT-small-housebot | 20299e2aa368dbb004427976aaffa6829877de9d | 2021-11-11T09:00:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kunalbhargava | null | kunalbhargava/DialoGPT-small-housebot | 0 | null | transformers | 35,531 | ---
tags:
- conversational
---
# House BOT |
kvothe28/DiabloGPT-small-Rick | 50535a92cb645b553224cacfa17c2fc1f124eed4 | 2021-09-03T21:16:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kvothe28 | null | kvothe28/DiabloGPT-small-Rick | 0 | null | transformers | 35,532 | ---
tags:
- conversational
---
# Rick DiabloGPT Model |
kwang2049/SBERT-base-nli-stsb-v2 | b97422efcaadd383da9a33f423e33271aa8d2047 | 2021-08-30T13:30:32.000Z | [
"pytorch"
] | null | false | kwang2049 | null | kwang2049/SBERT-base-nli-stsb-v2 | 0 | null | null | 35,533 | Entry not found |
kwang2049/SBERT-base-nli-v2 | f38abd2c15aacf7a3993754e72d4f1d0bb6b7843 | 2021-08-30T13:29:35.000Z | [
"pytorch"
] | null | false | kwang2049 | null | kwang2049/SBERT-base-nli-v2 | 0 | null | null | 35,534 | Entry not found |
kwang2049/TSDAE-askubuntu | a7c794a288383693c964479710f519b30ae8321e | 2021-10-25T16:17:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-askubuntu | 0 | null | transformers | 35,535 | # kwang2049/TSDAE-askubuntu
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
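For orientation only, here is a minimal TSDAE training sketch following that SentenceTransformers example; the sentence list, batch size, and epoch count below are placeholders rather than the settings used for this checkpoint:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

model_name = 'bert-base-uncased'
word_embedding_model = models.Transformer(model_name)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), 'cls')  # CLS-pooling, as used by this model
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

train_sentences = ['How do I mount a USB drive?', 'Wifi drops after suspend.']  # placeholder sentences
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)  # builds (noisy, original) sentence pairs
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=model_name, tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler='constantlr',
    optimizer_params={'lr': 3e-5},
    show_progress_bar=True,
)
```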
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
kwang2049/TSDAE-cqadupstack | 9e5000f269f165a79392a858ee60653bc2cb634f | 2021-10-25T16:18:29.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-cqadupstack | 0 | null | transformers | 35,536 | # kwang2049/TSDAE-cqadupstack
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
kwang2049/TSDAE-scidocs | d2f136c580be5d551232344d1f62b3ccec264d02 | 2021-10-25T16:19:04.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-scidocs | 0 | null | transformers | 35,537 | # kwang2049/TSDAE-scidocs
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on scidocs in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on scidocs with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
kwang2049/TSDAE-scidocs2nli_stsb | 425ea67713d31c033f16c5e5a17a80de1ba7cc5a | 2021-10-25T16:15:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-scidocs2nli_stsb | 0 | null | transformers | 35,538 | # kwang2049/TSDAE-scidocs2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain scidocs. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on scidocs with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
kwang2049/TSDAE-twitterpara | ae49bd6bb98ef1d586a6aee2a345b52a526749e8 | 2021-10-25T16:18:44.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-twitterpara | 0 | null | transformers | 35,539 | # kwang2049/TSDAE-twitterpara
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
l53513955/False_Entity_Identifier | a259b8994fdb142623f147b2ca6733ac76082492 | 2021-12-20T05:48:31.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | l53513955 | null | l53513955/False_Entity_Identifier | 0 | null | transformers | 35,540 | Entry not found |
lagodw/plotly_gpt2_large | 66d42bca940e070d2d846457bbf784d101dc8dd9 | 2021-10-06T22:33:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/plotly_gpt2_large | 0 | null | transformers | 35,541 | Entry not found |
lagodw/reddit_bert | 1a706487954953e865e6652a5030051d57105bab | 2021-09-04T19:12:32.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
] | null | false | lagodw | null | lagodw/reddit_bert | 0 | 1 | transformers | 35,542 | Entry not found |
lalopey/benn_eifert | a7f2d28e0b18b8fb490ff783e77ad9e557577bfd | 2021-05-23T06:25:18.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lalopey | null | lalopey/benn_eifert | 0 | null | transformers | 35,543 | Entry not found |
lapacc33/DialoGPT-medium-rick | c358130c33a6a9d51ecbcabe43ace1a64bd6bb98 | 2021-10-29T05:39:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lapacc33 | null | lapacc33/DialoGPT-medium-rick | 0 | null | transformers | 35,544 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
larcane/kogpt2-cat-diary | d4dbb048874c85408313922dd78e7c9b867312ed | 2021-12-18T15:45:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | larcane | null | larcane/kogpt2-cat-diary | 0 | null | transformers | 35,545 | Entry not found |
laugustyniak/roberta-polish-web-embedding-v1 | ea9478a6bd3a99efb45cea4cd9adb635b2f7df3f | 2021-05-20T17:37:19.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | laugustyniak | null | laugustyniak/roberta-polish-web-embedding-v1 | 0 | null | transformers | 35,546 | Entry not found |
laxya007/gpt2_BRM | fd100258c6e8867b5d81efa8befbc1933ab48d78 | 2021-10-23T08:23:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_BRM | 0 | null | transformers | 35,547 | Entry not found |
laxya007/gpt2_BSA_Leg_ipr_OE_OS | 8cae9788ca2d14ed1a183231970bacde4fadfe4f | 2021-06-18T08:40:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_BSA_Leg_ipr_OE_OS | 0 | null | transformers | 35,548 | Entry not found |
lbh020300/mymodel007 | 0d94893f4efee21622a8a660129f802193507171 | 2021-11-02T16:06:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lbh020300 | null | lbh020300/mymodel007 | 0 | null | transformers | 35,549 | Entry not found |
lee1jun/wav2vec2-base-100h-finetuned | 2b6bcfc36d916596b911d93cd26fc92b28984023 | 2021-07-06T10:01:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lee1jun | null | lee1jun/wav2vec2-base-100h-finetuned | 0 | null | transformers | 35,550 | Entry not found |
leemii18/robustqa-baseline-02 | 245beb64ac2fe26658f32a30cbca6ebc5118901c | 2021-05-05T17:47:41.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | leemii18 | null | leemii18/robustqa-baseline-02 | 0 | null | transformers | 35,551 | Entry not found |
lewtun/dummy-translation | c2d443e0dccb8040e1f4520aec93db447df7b2d8 | 2021-07-13T12:43:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | false | lewtun | null | lewtun/dummy-translation | 0 | null | transformers | 35,552 | ---
tags:
- generated_from_trainer
model_index:
- name: dummy-translation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dummy-translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
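Since the usage sections above are still empty, a minimal inference sketch is shown below (not part of the original card). It assumes the checkpoint keeps the English-to-Romanian direction of its base model; the input sentence is only illustrative.
```python
# Minimal sketch: seq2seq generation with this checkpoint (assumptions noted above).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "lewtun/dummy-translation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("This is a test sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```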
|
lewtun/marian-finetuned-kde4-en-to-fr | e51aad09934acd4a5d0bb38d0686bedbab7840c4 | 2021-11-14T16:59:34.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | lewtun | null | lewtun/marian-finetuned-kde4-en-to-fr | 0 | null | transformers | 35,553 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 38.988820814501665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6772
- Bleu: 38.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
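The card has no usage section yet, so here is a minimal sketch (not part of the original card) using the `transformers` translation pipeline; the input string is only an illustrative KDE-style UI message.
```python
# Minimal sketch: English -> French translation with this checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="lewtun/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```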
|
lewtun/metnet-test-2 | e55e5fd8e175b9679cdcdc21416afff44475d861 | 2021-09-06T10:42:38.000Z | [
"pytorch",
"transformers"
] | null | false | lewtun | null | lewtun/metnet-test-2 | 0 | null | transformers | 35,554 | Entry not found |
lewtun/metnet-test-3 | 0ff97bde804713a4ad00899d2862f74586306ba2 | 2021-09-06T10:53:04.000Z | [
"pytorch",
"transformers",
"autonlp",
"evaluation",
"benchmark"
] | null | false | lewtun | null | lewtun/metnet-test-3 | 0 | null | transformers | 35,555 | ---
tags:
- autonlp
- evaluation
- benchmark
---
# Model Card for MetNet
|
lewtun/metnet-test-5 | 12fe5921c2c4bef26138ec8d3d34b27c0ebd70bd | 2021-09-06T11:01:50.000Z | [
"pytorch",
"transformers",
"satflow",
"license:mit"
] | null | false | lewtun | null | lewtun/metnet-test-5 | 0 | null | transformers | 35,556 | ---
license: mit
tags:
- satflow
---
# MetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
lewtun/metnet-test | 2cb8dbf9b3e001519cb4b6206f93cd62d9ded316 | 2021-09-06T09:22:37.000Z | [
"pytorch"
] | null | false | lewtun | null | lewtun/metnet-test | 0 | null | null | 35,557 | Entry not found |
lewtun/minilm-finetuned-imdb-accelerate | da6f67e725230781b225a986e40c9e283bcb537e | 2021-09-29T08:48:14.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lewtun | null | lewtun/minilm-finetuned-imdb-accelerate | 0 | null | transformers | 35,558 | Entry not found |
lewtun/mt5-finetuned-amazon-en-es-accelerate | dfe5bea91037602120869439cb6f63cd259c91e4 | 2021-11-11T15:12:52.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lewtun | null | lewtun/mt5-finetuned-amazon-en-es-accelerate | 0 | null | transformers | 35,559 | Entry not found |
lg/fexp_1 | 1ca4f9b9a7238ffa70bc0a46c3a72ea75f81fdf5 | 2021-05-20T23:37:11.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/fexp_1 | 0 | null | transformers | 35,560 | # This model is probably not what you're looking for. |
lg/fexp_2 | ae084e2eb00c3cba0af49853fc3694d321c8a4a6 | 2021-05-01T17:56:11.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/fexp_2 | 0 | null | transformers | 35,561 | Entry not found |
lg/fexp_7 | 0f948d3e89789b81f90fbcf1b69aed53afa0269e | 2021-05-03T05:27:39.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/fexp_7 | 0 | null | transformers | 35,562 | Entry not found |
lg/fexp_8 | e438f00780887eed802bcbf528b4e788760d0aaf | 2021-05-02T16:58:34.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/fexp_8 | 0 | null | transformers | 35,563 | Entry not found |
lg/ghpy_2k | 14f17bce6bfdfd7e8217d599b125a4ac1dc32c3c | 2021-05-14T16:27:41.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/ghpy_2k | 0 | null | transformers | 35,564 | Entry not found |
lg/ghpy_40k | 8994fc0eae81882834ca1c11d7847efd2a9db012 | 2021-05-20T23:37:47.000Z | [
"pytorch"
] | null | false | lg | null | lg/ghpy_40k | 0 | null | null | 35,565 | # This model is probably not what you're looking for. |
lgris/bp-commonvoice10-xlsr | 163b0e6c4474f107e62193dae59882cfc73b537c | 2021-11-27T21:02:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/bp-commonvoice10-xlsr | 0 | null | transformers | 35,566 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# commonvoice10-xlsr: Wav2vec 2.0 with Common Voice Dataset
This is a demonstration of a fine-tuned Wav2vec 2.0 model for Brazilian Portuguese using the [Common Voice 7.0](https://commonvoice.mozilla.org/pt) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | 37.8h | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| commonvoice10 (demonstration below) | 0.133 | 0.189 | 0.165 | 0.189 | 0.247 | 0.474 | 0.251 | 0.235 |
| commonvoice10 + 4-gram (demonstration below) | 0.060 | 0.117 | 0.088 | 0.136 | 0.181 | 0.394 | 0.227 | 0.171 |
## Demonstration
```python
MODEL_NAME = "lgris/commonvoice10-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.13291846056190185
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.18909733896486755
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.1655429292929293
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1894711228284466
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.2471983709551264
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.4739658565194102
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.2510294913419914
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.060609303416680915
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.11758415681158373
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.08815340909090909
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1359966791836458
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1818429601530829
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.39469326522731385
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.22779897186147183
|
lgris/bp-lapsbm1-xlsr | 905303347f0caffc6a8b13abc00177eedbf9e4ce | 2021-11-27T21:07:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/bp-lapsbm1-xlsr | 0 | null | transformers | 35,567 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# lapsbm1-xlsr: Wav2vec 2.0 with LaPSBM Dataset
This is a demonstration of a fine-tuned Wav2vec 2.0 model for Brazilian Portuguese using the [LaPS BM](https://github.com/falabrasil/gitlab-resources) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| lapsbm1\_100 (demonstration below) | 0.111 | 0.418 | 0.145 | 0.299 | 0.562 | 0.580 | 0.469 | 0.369 |
| lapsbm1\_100 + 4-gram (demonstration below) | 0.061 | 0.305 | 0.089 | 0.201 | 0.452 | 0.525 | 0.381 | 0.287 |
## Demonstration
```python
MODEL_NAME = "lgris/lapsbm1-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.11147816967489037
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.41880890234535906
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.1451893939393939
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.29958960206171104
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.5626767414610376
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.5807549973642049
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.4693479437229436
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.06157628194513477
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.3051714756833442
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.0893623737373737
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.20062044237806004
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.4522665618175908
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.5256707813182246
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.38106331168831165
|
lgris/bp-sid10-xlsr | 4284d50d1d0b6561e63615dc1585d9425db2f03d | 2021-11-27T21:09:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/bp-sid10-xlsr | 0 | null | transformers | 35,568 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# sid10-xlsr: Wav2vec 2.0 with Sidney Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Sidney](https://igormq.github.io/datasets/) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | 7.2h | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | 7.2h| -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| sid\_10 (demonstration below) |0.186 | 0.327 | 0.207 | 0.505 | 0.124 | 0.835 | 0.472 | 0.379|
| sid\_10 + 4-gram (demonstration below) |0.096 | 0.223 | 0.115 | 0.432 | 0.101 | 0.791 | 0.348 | 0.301|
## Demonstration
```python
MODEL_NAME = "lgris/sid10-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.18623689076557778
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.3279775395502392
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.20780303030303032
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.5056711598536057
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1247776617710105
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.8350609256842175
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.47242153679653687
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.09677271347353278
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.22363215674470321
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.1154924242424242
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.4322369152606427
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.10080313085145765
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.7911789829264236
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.34786255411255407
|
lgris/distilxlsr_bp_16-24 | 61102aab99832521692f62cdd8a5f9e4ac914047 | 2021-12-30T00:38:16.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"pt",
"arxiv:2110.01900",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | lgris | null | lgris/distilxlsr_bp_16-24 | 0 | null | transformers | 35,569 | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task (the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900)).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
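As a rough illustration (not part of the original card), the distilled checkpoint can also be loaded for plain feature extraction with the standard Wav2Vec2 classes; the feature extractor below is borrowed from the original XLSR-53 repository in case none is bundled here, and the dummy audio is a placeholder:
```python
# Hedged sketch: extract hidden representations with the distilled model.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("lgris/distilxlsr_bp_16-24")

speech = torch.zeros(16_000).numpy()  # replace with 1 second of real 16 kHz speech
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.input_values)

print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)
```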
|
lgris/distilxlsr_bp_4-12 | 61d69f4659aa8e30bce84daf6f9769f16dfcd68a | 2021-12-30T00:38:04.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"pt",
"arxiv:2110.01900",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | lgris | null | lgris/distilxlsr_bp_4-12 | 0 | null | transformers | 35,570 | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task (the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900)).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
|
lgris/distilxlsr_bp_8-12-24 | 1af2b82d5420a218887d21a032cc37cbadd16842 | 2021-12-30T00:37:34.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"pt",
"arxiv:2110.01900",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | lgris | null | lgris/distilxlsr_bp_8-12-24 | 0 | null | transformers | 35,571 | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task (the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900)).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
|
lgris/wav2vec2-large-xls-r-300m-pt-cv | 3906b54f22780a919093296fa94edf627b1926a3 | 2022-03-24T11:52:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-large-xls-r-300m-pt-cv | 0 | null | transformers | 35,572 | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- robust-speech-event
- pt
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-pt-cv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 24.29
- name: Test CER
type: cer
value: 7.51
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 55.72
- name: Test CER
type: cer
value: 21.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 47.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 50.78
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pt-cv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3418
- Wer: 0.3581
## Model description
More information needed
## Intended uses & limitations
More information needed
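While the card leaves this section open, a rough transcription sketch with the standard Wav2Vec2 CTC classes could look like the following; the audio file name is an assumption and any mono Portuguese recording will do:
```python
# Hedged sketch: greedy CTC decoding with this fine-tuned XLS-R checkpoint.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "lgris/wav2vec2-large-xls-r-300m-pt-cv"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("example.wav")                     # assumed local file
speech = torchaudio.functional.resample(speech, sr, 16_000)[0]  # mono, 16 kHz

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```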
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.9035 | 0.2 | 100 | 4.2750 | 1.0 |
| 3.3275 | 0.41 | 200 | 3.0334 | 1.0 |
| 3.0016 | 0.61 | 300 | 2.9494 | 1.0 |
| 2.1874 | 0.82 | 400 | 1.4355 | 0.8721 |
| 1.09 | 1.02 | 500 | 0.9987 | 0.7165 |
| 0.8251 | 1.22 | 600 | 0.7886 | 0.6406 |
| 0.6927 | 1.43 | 700 | 0.6753 | 0.5801 |
| 0.6143 | 1.63 | 800 | 0.6300 | 0.5509 |
| 0.5451 | 1.84 | 900 | 0.5586 | 0.5156 |
| 0.5003 | 2.04 | 1000 | 0.5493 | 0.5027 |
| 0.3712 | 2.24 | 1100 | 0.5271 | 0.4872 |
| 0.3486 | 2.45 | 1200 | 0.4953 | 0.4817 |
| 0.3498 | 2.65 | 1300 | 0.4619 | 0.4538 |
| 0.3112 | 2.86 | 1400 | 0.4570 | 0.4387 |
| 0.3013 | 3.06 | 1500 | 0.4437 | 0.4147 |
| 0.2136 | 3.27 | 1600 | 0.4176 | 0.4124 |
| 0.2131 | 3.47 | 1700 | 0.4281 | 0.4194 |
| 0.2099 | 3.67 | 1800 | 0.3864 | 0.3949 |
| 0.1925 | 3.88 | 1900 | 0.3926 | 0.3913 |
| 0.1709 | 4.08 | 2000 | 0.3764 | 0.3804 |
| 0.1406 | 4.29 | 2100 | 0.3787 | 0.3742 |
| 0.1342 | 4.49 | 2200 | 0.3645 | 0.3693 |
| 0.1305 | 4.69 | 2300 | 0.3463 | 0.3625 |
| 0.1298 | 4.9 | 2400 | 0.3418 | 0.3581 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
lgris/wav2vec2-large-xlsr-coraa-portuguese-cv7 | b1331c0703c2bc32fbaf46d1e12d00d3e990e8b5 | 2022-02-10T23:22:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"pt",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-large-xlsr-coraa-portuguese-cv7 | 0 | null | transformers | 35,573 | ---
license: apache-2.0
tags:
- generated_from_trainer
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xlsr-coraa-portuguese-cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-portuguese-cv7
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1777
- Wer: 0.1339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4779 | 0.13 | 100 | 0.2620 | 0.2020 |
| 0.4505 | 0.26 | 200 | 0.2339 | 0.1998 |
| 0.4285 | 0.39 | 300 | 0.2507 | 0.2109 |
| 0.4148 | 0.52 | 400 | 0.2311 | 0.2101 |
| 0.4072 | 0.65 | 500 | 0.2278 | 0.1899 |
| 0.388 | 0.78 | 600 | 0.2193 | 0.1898 |
| 0.3952 | 0.91 | 700 | 0.2108 | 0.1901 |
| 0.3851 | 1.04 | 800 | 0.2121 | 0.1788 |
| 0.3496 | 1.17 | 900 | 0.2154 | 0.1776 |
| 0.3063 | 1.3 | 1000 | 0.2095 | 0.1730 |
| 0.3376 | 1.43 | 1100 | 0.2129 | 0.1801 |
| 0.3273 | 1.56 | 1200 | 0.2132 | 0.1776 |
| 0.3347 | 1.69 | 1300 | 0.2054 | 0.1698 |
| 0.323 | 1.82 | 1400 | 0.1986 | 0.1724 |
| 0.3079 | 1.95 | 1500 | 0.2005 | 0.1701 |
| 0.3029 | 2.08 | 1600 | 0.2159 | 0.1644 |
| 0.2694 | 2.21 | 1700 | 0.1992 | 0.1678 |
| 0.2733 | 2.34 | 1800 | 0.2032 | 0.1657 |
| 0.269 | 2.47 | 1900 | 0.2056 | 0.1592 |
| 0.2869 | 2.6 | 2000 | 0.2058 | 0.1616 |
| 0.2813 | 2.73 | 2100 | 0.1868 | 0.1584 |
| 0.2616 | 2.86 | 2200 | 0.1841 | 0.1550 |
| 0.2809 | 2.99 | 2300 | 0.1902 | 0.1577 |
| 0.2598 | 3.12 | 2400 | 0.1910 | 0.1514 |
| 0.24 | 3.25 | 2500 | 0.1971 | 0.1555 |
| 0.2481 | 3.38 | 2600 | 0.1853 | 0.1537 |
| 0.2437 | 3.51 | 2700 | 0.1897 | 0.1496 |
| 0.2384 | 3.64 | 2800 | 0.1842 | 0.1495 |
| 0.2405 | 3.77 | 2900 | 0.1884 | 0.1500 |
| 0.2372 | 3.9 | 3000 | 0.1950 | 0.1548 |
| 0.229 | 4.03 | 3100 | 0.1928 | 0.1477 |
| 0.2047 | 4.16 | 3200 | 0.1891 | 0.1472 |
| 0.2102 | 4.29 | 3300 | 0.1930 | 0.1473 |
| 0.199 | 4.42 | 3400 | 0.1914 | 0.1456 |
| 0.2121 | 4.55 | 3500 | 0.1840 | 0.1437 |
| 0.211 | 4.67 | 3600 | 0.1843 | 0.1403 |
| 0.2072 | 4.8 | 3700 | 0.1836 | 0.1428 |
| 0.2224 | 4.93 | 3800 | 0.1747 | 0.1412 |
| 0.1974 | 5.06 | 3900 | 0.1813 | 0.1416 |
| 0.1895 | 5.19 | 4000 | 0.1869 | 0.1406 |
| 0.1763 | 5.32 | 4100 | 0.1830 | 0.1394 |
| 0.2001 | 5.45 | 4200 | 0.1775 | 0.1394 |
| 0.1909 | 5.58 | 4300 | 0.1806 | 0.1373 |
| 0.1812 | 5.71 | 4400 | 0.1784 | 0.1359 |
| 0.1737 | 5.84 | 4500 | 0.1778 | 0.1353 |
| 0.1915 | 5.97 | 4600 | 0.1777 | 0.1349 |
| 0.1921 | 6.1 | 4700 | 0.1784 | 0.1359 |
| 0.1805 | 6.23 | 4800 | 0.1757 | 0.1348 |
| 0.1742 | 6.36 | 4900 | 0.1771 | 0.1341 |
| 0.1709 | 6.49 | 5000 | 0.1777 | 0.1339 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
lgris/wavlm-large-CORAA-pt-cv7 | abbdddf9b74b4637df78675b0e3a657c190a77bc | 2022-02-10T23:16:09.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wavlm-large-CORAA-pt-cv7 | 0 | null | transformers | 35,574 | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- pt
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wavlm-large-CORAA-pt-cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-large-CORAA-pt-cv7
This model is a fine-tuned version of [lgris/WavLM-large-CORAA-pt](https://huggingface.co/lgris/WavLM-large-CORAA-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2546
- Wer: 0.2261
## Model description
More information needed
## Intended uses & limitations
More information needed
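As an illustration only (not from the original card), inference with this WavLM checkpoint can be sketched with the CTC classes from transformers; it assumes the repository ships a processor alongside the model and that `example.wav` is a local recording:
```python
# Hedged sketch: transcription with the fine-tuned WavLM model.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, WavLMForCTC

model_id = "lgris/wavlm-large-CORAA-pt-cv7"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = WavLMForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("example.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000)[0]

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```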
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6029 | 0.13 | 100 | 0.3679 | 0.3347 |
| 0.5297 | 0.26 | 200 | 0.3516 | 0.3227 |
| 0.5134 | 0.39 | 300 | 0.3327 | 0.3167 |
| 0.4941 | 0.52 | 400 | 0.3281 | 0.3122 |
| 0.4816 | 0.65 | 500 | 0.3154 | 0.3102 |
| 0.4649 | 0.78 | 600 | 0.3199 | 0.3058 |
| 0.461 | 0.91 | 700 | 0.3047 | 0.2974 |
| 0.4613 | 1.04 | 800 | 0.3006 | 0.2900 |
| 0.4198 | 1.17 | 900 | 0.2951 | 0.2891 |
| 0.3864 | 1.3 | 1000 | 0.2989 | 0.2862 |
| 0.3963 | 1.43 | 1100 | 0.2932 | 0.2830 |
| 0.3953 | 1.56 | 1200 | 0.2936 | 0.2829 |
| 0.3962 | 1.69 | 1300 | 0.2952 | 0.2773 |
| 0.3811 | 1.82 | 1400 | 0.2915 | 0.2748 |
| 0.3736 | 1.95 | 1500 | 0.2839 | 0.2684 |
| 0.3507 | 2.08 | 1600 | 0.2914 | 0.2678 |
| 0.3277 | 2.21 | 1700 | 0.2895 | 0.2652 |
| 0.3344 | 2.34 | 1800 | 0.2843 | 0.2673 |
| 0.335 | 2.47 | 1900 | 0.2821 | 0.2635 |
| 0.3559 | 2.6 | 2000 | 0.2830 | 0.2599 |
| 0.3254 | 2.73 | 2100 | 0.2711 | 0.2577 |
| 0.3263 | 2.86 | 2200 | 0.2685 | 0.2546 |
| 0.3266 | 2.99 | 2300 | 0.2679 | 0.2521 |
| 0.3066 | 3.12 | 2400 | 0.2727 | 0.2526 |
| 0.2998 | 3.25 | 2500 | 0.2648 | 0.2537 |
| 0.2961 | 3.38 | 2600 | 0.2630 | 0.2519 |
| 0.3046 | 3.51 | 2700 | 0.2684 | 0.2506 |
| 0.3006 | 3.64 | 2800 | 0.2604 | 0.2492 |
| 0.2992 | 3.77 | 2900 | 0.2682 | 0.2508 |
| 0.2775 | 3.9 | 3000 | 0.2732 | 0.2440 |
| 0.2903 | 4.03 | 3100 | 0.2659 | 0.2427 |
| 0.2535 | 4.16 | 3200 | 0.2650 | 0.2433 |
| 0.2714 | 4.29 | 3300 | 0.2588 | 0.2394 |
| 0.2636 | 4.42 | 3400 | 0.2652 | 0.2434 |
| 0.2647 | 4.55 | 3500 | 0.2624 | 0.2371 |
| 0.2796 | 4.67 | 3600 | 0.2611 | 0.2373 |
| 0.2644 | 4.8 | 3700 | 0.2604 | 0.2341 |
| 0.2657 | 4.93 | 3800 | 0.2567 | 0.2331 |
| 0.2423 | 5.06 | 3900 | 0.2594 | 0.2322 |
| 0.2556 | 5.19 | 4000 | 0.2587 | 0.2323 |
| 0.2327 | 5.32 | 4100 | 0.2639 | 0.2299 |
| 0.2613 | 5.45 | 4200 | 0.2569 | 0.2310 |
| 0.2382 | 5.58 | 4300 | 0.2585 | 0.2298 |
| 0.2404 | 5.71 | 4400 | 0.2543 | 0.2287 |
| 0.2368 | 5.84 | 4500 | 0.2553 | 0.2286 |
| 0.2514 | 5.97 | 4600 | 0.2517 | 0.2279 |
| 0.2415 | 6.1 | 4700 | 0.2524 | 0.2270 |
| 0.2338 | 6.23 | 4800 | 0.2540 | 0.2265 |
| 0.219 | 6.36 | 4900 | 0.2549 | 0.2263 |
| 0.2428 | 6.49 | 5000 | 0.2546 | 0.2261 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
liaad/srl-enpt_xlmr-base | da8cf09e9d7ec9758e2531387b3003114cf9cd9b | 2021-09-22T08:56:20.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"transformers",
"xlm-roberta-base",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-enpt_xlmr-base | 0 | null | transformers | 35,575 | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R base fine-tuned on English and Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned first on the English CoNLL-formatted OntoNotes v5.0 semantic role labeling data and then on the PropBank.Br data. It is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-enpt_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-enpt_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; it was then fine-tuned on the PropBank.Br dataset using 10-fold cross-validation. The resulting models were tested on the folds as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
liaad/srl-enpt_xlmr-large | 25990972fccbf1783c9ffad016cb7fa19c2f6e73 | 2021-09-22T08:56:23.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"transformers",
"xlm-roberta-large",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-enpt_xlmr-large | 0 | null | transformers | 35,576 | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R large fine-tuned on English and Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned first on the English CoNLL-formatted OntoNotes v5.0 semantic role labeling data and then on the PropBank.Br data. It is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-enpt_xlmr-large")
model = AutoModel.from_pretrained("liaad/srl-enpt_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; it was then fine-tuned on the PropBank.Br dataset using 10-fold cross-validation. The resulting models were tested on the folds as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
life4free96/DialogGPT-med-TeiaMoranta | ff21b49f9ea3598fd3650ea1da98cdb741b1f83b | 2021-11-11T12:07:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | life4free96 | null | life4free96/DialogGPT-med-TeiaMoranta | 0 | null | transformers | 35,577 | ---
tags:
- conversational
---
# Teia Moranta
light/small-rickk | e1989ea7092f9666e7d923e7c85aa20e736c6ecf | 2021-09-15T18:38:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | light | null | light/small-rickk | 0 | null | transformers | 35,578 | ---
tags:
- conversational
---
# Rick Sanchez
lilitket/wav2vec2-large-xls-r-300m-turkish-colab | b66d79874e2ea37e805d29b53bb857ee011ef5df | 2022-02-24T18:57:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-turkish-colab | 0 | null | transformers | 35,579 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7126
- Wer: 0.8198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 120
- mixed_precision_training: Native AMP
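For reference, the hyperparameters listed above map roughly onto a `TrainingArguments` configuration like the sketch below; the output directory and the evaluation/save intervals are assumptions rather than values taken from the original notebook:
```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-turkish-colab",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    warmup_steps=500,
    num_train_epochs=120,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="steps",   # assumed: evaluate every 200 optimization steps
    eval_steps=200,
    save_steps=200,
    logging_steps=200,
    seed=42,
)
```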
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 6.7419 | 2.38 | 200 | 3.1913 | 1.0 |
| 3.0446 | 4.76 | 400 | 2.3247 | 1.0 |
| 1.3163 | 7.14 | 600 | 1.2629 | 0.9656 |
| 0.6058 | 9.52 | 800 | 1.2203 | 0.9343 |
| 0.3687 | 11.9 | 1000 | 1.2157 | 0.8849 |
| 0.2644 | 14.29 | 1200 | 1.3693 | 0.8992 |
| 0.2147 | 16.67 | 1400 | 1.3321 | 0.8623 |
| 0.1962 | 19.05 | 1600 | 1.3476 | 0.8886 |
| 0.1631 | 21.43 | 1800 | 1.3984 | 0.8755 |
| 0.15 | 23.81 | 2000 | 1.4602 | 0.8798 |
| 0.1311 | 26.19 | 2200 | 1.4727 | 0.8836 |
| 0.1174 | 28.57 | 2400 | 1.5257 | 0.8805 |
| 0.1155 | 30.95 | 2600 | 1.4697 | 0.9337 |
| 0.1046 | 33.33 | 2800 | 1.6076 | 0.8667 |
| 0.1063 | 35.71 | 3000 | 1.5012 | 0.8861 |
| 0.0996 | 38.1 | 3200 | 1.6204 | 0.8605 |
| 0.088 | 40.48 | 3400 | 1.4788 | 0.8586 |
| 0.089 | 42.86 | 3600 | 1.5983 | 0.8648 |
| 0.0805 | 45.24 | 3800 | 1.5045 | 0.8298 |
| 0.0718 | 47.62 | 4000 | 1.6361 | 0.8611 |
| 0.0718 | 50.0 | 4200 | 1.5088 | 0.8548 |
| 0.0649 | 52.38 | 4400 | 1.5491 | 0.8554 |
| 0.0685 | 54.76 | 4600 | 1.5939 | 0.8442 |
| 0.0588 | 57.14 | 4800 | 1.6321 | 0.8536 |
| 0.0591 | 59.52 | 5000 | 1.6468 | 0.8442 |
| 0.0529 | 61.9 | 5200 | 1.6086 | 0.8661 |
| 0.0482 | 64.29 | 5400 | 1.6622 | 0.8517 |
| 0.0396 | 66.67 | 5600 | 1.6191 | 0.8436 |
| 0.0463 | 69.05 | 5800 | 1.6231 | 0.8661 |
| 0.0415 | 71.43 | 6000 | 1.6874 | 0.8511 |
| 0.0383 | 73.81 | 6200 | 1.7054 | 0.8411 |
| 0.0411 | 76.19 | 6400 | 1.7073 | 0.8486 |
| 0.0346 | 78.57 | 6600 | 1.7137 | 0.8342 |
| 0.0318 | 80.95 | 6800 | 1.6523 | 0.8329 |
| 0.0299 | 83.33 | 7000 | 1.6893 | 0.8579 |
| 0.029 | 85.71 | 7200 | 1.7162 | 0.8429 |
| 0.025 | 88.1 | 7400 | 1.7589 | 0.8529 |
| 0.025 | 90.48 | 7600 | 1.7581 | 0.8398 |
| 0.0232 | 92.86 | 7800 | 1.8459 | 0.8442 |
| 0.0215 | 95.24 | 8000 | 1.7942 | 0.8448 |
| 0.0222 | 97.62 | 8200 | 1.6848 | 0.8442 |
| 0.0179 | 100.0 | 8400 | 1.7223 | 0.8298 |
| 0.0176 | 102.38 | 8600 | 1.7426 | 0.8404 |
| 0.016 | 104.76 | 8800 | 1.7501 | 0.8411 |
| 0.0153 | 107.14 | 9000 | 1.7185 | 0.8235 |
| 0.0136 | 109.52 | 9200 | 1.7250 | 0.8292 |
| 0.0117 | 111.9 | 9400 | 1.7159 | 0.8185 |
| 0.0123 | 114.29 | 9600 | 1.7135 | 0.8248 |
| 0.0121 | 116.67 | 9800 | 1.7189 | 0.8210 |
| 0.0116 | 119.05 | 10000 | 1.7126 | 0.8198 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
limivan/DialoGPT-small-c3po | bcc7b306c371668e90f83440dbbf67f6243b0a13 | 2021-08-27T12:40:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | limivan | null | limivan/DialoGPT-small-c3po | 0 | null | transformers | 35,580 | ---
tags:
- conversational
---
# C3PO DialoGPT Model
lincoln/2021twitchfr-conv-bert-small-mlm-simcse | 1e612dcc79b987a80bb69608ad2d2318d93b7042 | 2022-01-07T18:00:43.000Z | [
"pytorch",
"convbert",
"feature-extraction",
"fr",
"sentence-transformers",
"sentence-similarity",
"transformers",
"twitch",
"license:mit"
] | sentence-similarity | false | lincoln | null | lincoln/2021twitchfr-conv-bert-small-mlm-simcse | 0 | 1 | sentence-transformers | 35,581 | ---
language:
- fr
license: mit
pipeline_tag: sentence-similarity
widget:
- source_sentence: "Bonsoir"
sentences:
- "Salut !"
- "Hello"
- "Bonsoir!"
- "Bonsouar!"
- "Bonsouar !"
- "De rien"
- "LUL LUL"
example_title: "Coucou"
- source_sentence: "elle s'en sort bien"
sentences:
- "elle a raison"
- "elle a tellement raison"
- "Elle a pas tort"
- "C'est bien ce qu'elle dit là"
- "Hello"
example_title: "Raison or not"
- source_sentence: "et la question énergétique n'est pas politique ?"
sentences:
- "C'est le nucléaire militaire qui a entaché le nucléaire pour l'énergie."
- "La fusion nucléaire c'est pas pour maintenant malheureusement"
- "le pro nucléaire redevient acceptable à gauche j'ai l'impression"
- "La mer à Nantes?"
- "c'est bien un olivier pour l'upr"
- "Moi je vois juste sa lavallière"
example_title: "Nucléaire"
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- twitch
- convbert
---
## ConvBERT-based representation model for Twitch chat messages
This is a [sentence-transformers](https://www.SBERT.net) model: it maps a text sequence to a 256-dimensional numerical vector and can be used for clustering or semantic search tasks.
The main goal of this experiment at Lincoln was to apply NLP techniques from scratch to a corpus of messages from a Twitch chat. These messages are written in French, but on an internet platform, with all the internet vocabulary that implies (typos, community slang, abbreviations, anglicisms, emotes, ...).
After training a `ConvBert` model and then an `MLM` model (see the Models section), we trained a _sentence-transformers_ model with the [SimCSE](https://www.sbert.net/examples/unsupervised_learning/SimCSE/README.html) learning framework, in an unsupervised fashion.
The objective is to specialize the _CLS_ token representation of each sequence so that it yields a numerical vector consistent with the corpus as a whole. _SimCSE_ artificially creates supervised positive and negative examples using dropout, turning the problem back into a classic supervised task.
_We do not guarantee the long-term stability of the model. It was built as part of a POC._
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('2021twitchfr-conv-bert-small-mlm-simcse')
embeddings = model.encode(sentences)
print(embeddings)
```
## Semantic Textual Similarity
```python
from sentence_transformers import SentenceTransformer, models, util
# Two lists of sentences
sentences1 = ['zackFCZack',
'Team bons petits plats',
'sa commence a quelle heure de base popcorn ?',
'BibleThump']
sentences2 = ['zack titulaire',
'salade de pates c une dinguerie',
'ça commence à être long la',
'NotLikeThis']
# Compute embedding for both lists
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
# Compute cosine similarities
cosine_scores = util.cos_sim(embeddings1, embeddings2)
# Output the pairs with their score
for i in range(len(sentences1)):
print("Score: {:.4f} | \"{}\" -vs- \"{}\" ".format(cosine_scores[i][i], sentences1[i], sentences2[i]))
# Score: 0.5783 | "zackFCZack" -vs- "zack titulaire"
# Score: 0.2881 | "Team bons petits plats" -vs- "salade de pates c une dinguerie"
# Score: 0.4529 | "sa commence a quelle heure de base popcorn ?" -vs- "ça commence à être long la"
# Score: 0.5805 | "BibleThump" -vs- "NotLikeThis"
```
## Training
* 500,000 sampled Twitch messages (see the data description of the base models)
* Batch size: 24
* Epochs: 24
* Loss: MultipleNegativesRankingLoss
_Notes:_
* _ConvBert was trained with a maximum length of 128 tokens but is used with 512 here. This is not a problem._
* _The training loss is not yet available, so there is little visibility on performance._
The full training code is on the public GitHub repository [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds); a minimal sketch of the setup is shown below.
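This sketch reproduces the unsupervised SimCSE recipe with sentence-transformers under stated assumptions: the starting MLM checkpoint name and the input file are placeholders, not values taken from the original repository.
```python
# Hedged sketch of the unsupervised SimCSE training loop (illustrative only).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

base = "lincoln/2021twitchfr-conv-bert-small-mlm"  # assumed starting checkpoint
word_embedding_model = models.Transformer(base, max_seq_length=512)
pooling = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling])

# SimCSE: each sentence is paired with itself; dropout creates two different views.
sentences = open("twitch_messages.txt", encoding="utf-8").read().splitlines()  # placeholder file
train_examples = [InputExample(texts=[s, s]) for s in sentences]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=24)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=24, show_progress_bar=True)
model.save("2021twitchfr-conv-bert-small-mlm-simcse")
```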
## Application:
We used a modified version of [BERTopic](https://maartengr.github.io/BERTopic/) to cluster the messages of a stream while taking the temporal dimension into account, i.e. the number of seconds elapsed since the start of the stream.

Overall, the approach gives satisfactory results for identifying recurring "similar" messages. It is, however, strongly influenced by the punctuation and structure of a message, which is largely explained by the limited training of the models and the small data volume. A simple way to illustrate the idea of mixing text similarity with time is sketched below.
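This is only an illustration of the time-aware clustering idea, not the exact pipeline used in the project: it appends a scaled timestamp to each sentence embedding before clustering, and the messages, timestamps and weighting factor are all made up.
```python
# Hedged sketch: cluster chat messages using embeddings plus a scaled time feature.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import normalize
from sentence_transformers import SentenceTransformer

messages = ["Bonsoir", "Bonsouar !", "le circus !!", "le sulli", "+1", "+1"]
seconds = np.array([10, 12, 300, 305, 900, 901], dtype=float)  # time since stream start

model = SentenceTransformer("lincoln/2021twitchfr-conv-bert-small-mlm-simcse")
embeddings = normalize(model.encode(messages))

# Append the scaled timestamp so that clusters stay local in time.
time_feature = (seconds / seconds.max()).reshape(-1, 1)
features = np.hstack([embeddings, 2.0 * time_feature])  # 2.0 = arbitrary time weight

labels = AgglomerativeClustering(n_clusters=3).fit_predict(features)
print(list(zip(messages, labels)))
```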
### Clustering of the "Backseat" show:
Between 7:30 pm and 8:00 pm:

🎞️ As a video: [youtu.be/EcjvlE9aTls](https://youtu.be/EcjvlE9aTls)
### Example clustering of the "PopCorn" show:
```txt
-------------------- LABEL 106 --------------------
circus (0.88)/sulli (0.23)/connu (0.19)/jure (0.12)/aime (0.11)
silouhette moyenne: 0.04
-------------------- LABEL 106 --------------------
2021-03-30 20:10:22 0.01: les gosse c est des animaux
2021-03-30 20:12:11 -0.03: oue c connu
2021-03-30 20:14:15 0.03: oh le circus !! <3
2021-03-30 20:14:19 0.12: le circus l'anciennnee
2021-03-30 20:14:22 0.06: jure le circus !
2021-03-30 20:14:27 -0.03: le sulli
2021-03-30 20:14:31 0.09: le circus??? j'aime po
2021-03-30 20:14:34 0.11: le Circus, hors de prix !
2021-03-30 20:14:35 -0.09: le Paddock a Rignac en Aveyron
2021-03-30 20:14:39 0.11: le circus ><
2021-03-30 20:14:39 0.04: le Titty Twister de Besançon
-------------------- LABEL 17 --------------------
pates (0.12)/riz (0.09)/pâtes (0.09)/salade (0.07)/emission (0.07)
silouhette moyenne: -0.05
-------------------- LABEL 17 --------------------
2021-03-30 20:11:18 -0.03: Des nanimaux trop beaux !
2021-03-30 20:13:11 -0.01: episode des simpsons ça...
2021-03-30 20:13:41 -0.01: des le debut d'emission ca tue mdrrrrr
2021-03-30 20:13:50 0.03: des "lasagnes"
2021-03-30 20:14:37 -0.18: poubelle la vie
2021-03-30 20:15:13 0.03: Une omelette
2021-03-30 20:15:35 -0.19: salade de bite
2021-03-30 20:15:36 -0.00: hahaha ce gastronome
2021-03-30 20:15:43 -0.08: salade de pates c une dinguerie
2021-03-30 20:17:00 -0.11: Une bonne femme !
2021-03-30 20:17:06 -0.05: bouffe des graines
2021-03-30 20:17:08 -0.06: des pokeball ?
2021-03-30 20:17:11 -0.12: le choux fleur cru
2021-03-30 20:17:15 0.05: des pockeball ?
2021-03-30 20:17:27 -0.00: du chou fleur crue
2021-03-30 20:17:36 -0.09: un râgout de Meynia !!!!
2021-03-30 20:17:43 -0.07: une line up Sa rd o ch Zack Ponce my dream
2021-03-30 20:17:59 -0.10: Pâtes/10
2021-03-30 20:18:09 -0.05: Team bons petits plats
2021-03-30 20:18:13 -0.10: pate level
2021-03-30 20:18:19 -0.03: que des trucs très basiques
2021-03-30 20:18:24 0.03: des pates et du jambon c'est de la cuisine?
2021-03-30 20:18:30 0.05: Des pates et du riz ouai
2021-03-30 20:18:37 -0.02: des gnocchis à la poele c'est cuisiner ?
2021-03-30 20:18:50 -0.03: Pâtes à pizzas, pulled pork, carbonade flamande, etc..
2021-03-30 20:19:01 -0.11: Des pâtes ou du riz ça compte ?
2021-03-30 20:19:22 -0.21: le noob
2021-03-30 20:19:47 -0.02: Une bonne escalope de milanaise les gars
2021-03-30 20:20:05 -0.04: faites des gratins et des quiches
-------------------- LABEL 67 --------------------
1 1 (0.25)/1 (0.19)/ (0.0)/ (0.0)/ (0.0)
silouhette moyenne: 0.96
-------------------- LABEL 67 --------------------
2021-03-30 20:24:17 0.94: +1
2021-03-30 20:24:37 0.97: +1
2021-03-30 20:24:37 0.97: +1
2021-03-30 20:24:38 0.97: +1
2021-03-30 20:24:39 0.97: +1
2021-03-30 20:24:43 0.97: +1
2021-03-30 20:24:44 0.97: +1
2021-03-30 20:24:47 0.97: +1
2021-03-30 20:24:49 0.97: +1
2021-03-30 20:25:00 0.97: +1
2021-03-30 20:25:21 0.95: +1
2021-03-30 20:25:25 0.95: +1
2021-03-30 20:25:28 0.94: +1
2021-03-30 20:25:30 0.94: +1
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ConvBertModel
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Models:
* [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small)
* [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm)
* [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse) |
linyi/dummy-model | d254ef8a8bdb3e86752fc45c0d8ce9995c23fb82 | 2021-11-07T00:42:27.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | linyi | null | linyi/dummy-model | 0 | null | transformers | 35,582 | Entry not found |
lkh4317/gpt2_fairy_tale | 9072d3366c57083192390f449429a234628a8aee | 2022-02-02T23:19:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lkh4317 | null | lkh4317/gpt2_fairy_tale | 0 | null | transformers | 35,583 | Entry not found |
logicbloke/wav2vec2-large-xlsr-53-arabic | e0ab8005d9072404d8768d16c35c030519acd5e0 | 2021-07-06T10:09:12.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | logicbloke | null | logicbloke/wav2vec2-large-xlsr-53-arabic | 0 | null | transformers | 35,584 | Entry not found |
logube/DialogGPT_small_harrypotter | d318f5c4d2cae946034ea8531f43217b48a56c22 | 2021-08-27T23:18:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | logube | null | logube/DialogGPT_small_harrypotter | 0 | null | transformers | 35,585 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model
lonewanderer27/KeitaroBot | 088feb8381efb70f435363bea297a7c19c7b483e | 2022-02-12T16:15:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lonewanderer27 | null | lonewanderer27/KeitaroBot | 0 | null | transformers | 35,586 | ---
tags:
- conversational
---
# Camp Buddy - Keitaro - DialoGPTSmall Model |
longcld/t5-base-squad-visquad-aqg | 503405836758f1f9a44bd1b18ecb81510305f9a5 | 2021-09-08T01:36:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | longcld | null | longcld/t5-base-squad-visquad-aqg | 0 | null | transformers | 35,587 | Entry not found |
longcld/t5-small-itranslate-visquad-aqg | c2b786e4f1db69d5a660db67876eb07711653300 | 2021-08-19T08:55:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | longcld | null | longcld/t5-small-itranslate-visquad-aqg | 0 | null | transformers | 35,588 | Entry not found |
longcld/t5-small-squad-itranslate-aqg | 33df45fbff5d865d417ddb101ea19268d4676d0f | 2021-08-17T20:44:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | longcld | null | longcld/t5-small-squad-itranslate-aqg | 0 | null | transformers | 35,589 | Entry not found |
longge/test | 3188372dfc9687561e46291998e2656fff84d9e0 | 2021-11-02T06:36:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | longge | null | longge/test | 0 | null | transformers | 35,590 | Entry not found |
longjuanfen/model700 | 88b475ba828919d78debe0f1b1303c694bc1ef12 | 2021-11-02T16:24:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | longjuanfen | null | longjuanfen/model700 | 0 | null | transformers | 35,591 | Entry not found |
longjuanfen/model701 | aef1b4582976b7151e4a6242f5e81ad8d9213bdf | 2021-11-03T17:23:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | longjuanfen | null | longjuanfen/model701 | 0 | null | transformers | 35,592 | Entry not found |
longnhit07/distilbert-base-uncased-finetuned-imdb | 6fe0181a2aa074e52528005975958ec249b6613f | 2022-01-10T09:02:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | longnhit07 | null | longnhit07/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 35,593 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7117 | 1.0 | 157 | 2.4977 |
| 2.5783 | 2.0 | 314 | 2.4241 |
| 2.5375 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lovellyweather/DialoGPT-medium-johnny | 9e30b225f829b6dde42881274a8d7c063b251817 | 2021-08-31T13:58:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lovellyweather | null | lovellyweather/DialoGPT-medium-johnny | 0 | null | transformers | 35,594 | ---
tags:
- conversational
---
# Johnny DialoGPT Model |
lsy641/ESC_Blender_Strategy | 72c5d30a57217f7b44b7dae6d95230241335ec04 | 2021-07-05T14:23:34.000Z | [
"pytorch"
] | null | false | lsy641 | null | lsy641/ESC_Blender_Strategy | 0 | 1 | null | 35,595 | Entry not found |
lsy641/ESC_Blender_noStrategy | 793dfc0d06ce980674b226a745d6dffd09761a4c | 2021-07-05T14:22:05.000Z | [
"pytorch"
] | null | false | lsy641 | null | lsy641/ESC_Blender_noStrategy | 0 | null | null | 35,596 | Entry not found |
ltrctelugu/ltrc-albert | 4c5f82a4367c836af23a467350d5caf13cdbe819 | 2021-11-23T16:49:32.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ltrctelugu | null | ltrctelugu/ltrc-albert | 0 | null | transformers | 35,597 | hello
|
ltrctelugu/ltrc-roberta | 2e5cff54823703dfe5600f733e9d80d0320cee19 | 2021-10-17T16:45:03.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ltrctelugu | null | ltrctelugu/ltrc-roberta | 0 | null | transformers | 35,598 | RoBERTa trained on 8.8 Million Telugu Sentences
|
lucasnobre212/description-test | 5795790b439bbf641b2fc538fb1ee70741538f48 | 2021-12-29T15:17:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lucasnobre212 | null | lucasnobre212/description-test | 0 | null | transformers | 35,599 | Entry not found |