modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
justin871030/bert-base-uncased-goemotions-group | 563fe8872f8e18bf9e54873b5e85a6bb2227a7fa | 2022-01-08T09:56:30.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | justin871030 | null | justin871030/bert-base-uncased-goemotions-group | 1 | null | transformers | 29,800 | Entry not found |
kSaluja/autonlp-tele_red_data_model-585716433 | 8cba1cd27d0246f06388ecab85b3db7fe1278df2 | 2022-02-21T12:46:27.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:kSaluja/autonlp-data-tele_red_data_model",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | kSaluja | null | kSaluja/autonlp-tele_red_data_model-585716433 | 1 | null | transformers | 29,801 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kSaluja/autonlp-data-tele_red_data_model
co2_eq_emissions: 2.379476355147211
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 585716433
- CO2 Emissions (in grams): 2.379476355147211
## Validation Metrics
- Loss: 0.15210922062397003
- Accuracy: 0.9724770642201835
- Precision: 0.950836820083682
- Recall: 0.9625838333921638
- F1: 0.9566742676723382
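For reference, the reported F1 is the harmonic mean of the precision and recall above; a quick check (illustrative Python, not part of the AutoNLP output):
```python
# Quick consistency check: F1 as the harmonic mean of precision and recall.
precision = 0.950836820083682
recall = 0.9625838333921638
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~0.9566742676723382, matching the value reported above
```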
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kSaluja/autonlp-tele_red_data_model-585716433
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("kSaluja/autonlp-tele_red_data_model-585716433", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kSaluja/autonlp-tele_red_data_model-585716433", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
kaesve/SciBERT_patent_reference_extraction | ab1be42fc29592f087cf15b65c75482c4a01ccee | 2021-01-12T14:59:37.000Z | [
"pytorch",
"arxiv:2101.01039",
"transformers"
] | null | false | kaesve | null | kaesve/SciBERT_patent_reference_extraction | 1 | null | transformers | 29,802 | # Reference extraction in patents
This repository contains a finetuned SciBERT model that can extract references to scientific literature from patents.
See https://github.com/kaesve/patent-citation-extraction and https://arxiv.org/abs/2101.01039 for more information.
|
kagennotsuki/DialoGPT-medium-radion | a3e8a9cf8016ba330f989699c7e8e4211c167af4 | 2021-09-10T04:41:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kagennotsuki | null | kagennotsuki/DialoGPT-medium-radion | 1 | null | transformers | 29,803 | ---
tags:
- conversational
---
# Radion DialoGPT Model |
kaggleodin/distilbert-base-uncased-finetuned-squad | 9008e98ccd2f1018b1ca4ec9bbec13cc35b6353b | 2021-11-22T04:08:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kaggleodin | null | kaggleodin/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 29,804 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
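The listed values correspond to standard `transformers` `Trainer` arguments. As an illustration only (the original training script is not part of this card, and the `output_dir` name is an assumption), they map onto a configuration like:
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters above onto TrainingArguments.
# The Adam betas/epsilon and the linear scheduler are the Trainer defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```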
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2291 | 1.0 | 5533 | 1.1581 |
| 0.9553 | 2.0 | 11066 | 1.1249 |
| 0.7767 | 3.0 | 16599 | 1.1639 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
kapilkd13/xls-r-hi-test | 1a708a34bdd61cbb89ea5be65df5071b274baec1 | 2022-03-24T11:55:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kapilkd13 | null | kapilkd13/xls-r-hi-test | 1 | null | transformers | 29,805 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 38.18
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7346
- Wer: 1.0479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.36 | 400 | 1.4595 | 1.0039 |
| 4.7778 | 2.71 | 800 | 0.8082 | 1.0115 |
| 0.6408 | 4.07 | 1200 | 0.7032 | 1.0079 |
| 0.3937 | 5.42 | 1600 | 0.6889 | 1.0433 |
| 0.3 | 6.78 | 2000 | 0.6820 | 1.0069 |
| 0.3 | 8.14 | 2400 | 0.6670 | 1.0196 |
| 0.226 | 9.49 | 2800 | 0.7216 | 1.0422 |
| 0.197 | 10.85 | 3200 | 0.7669 | 1.0534 |
| 0.165 | 12.2 | 3600 | 0.7517 | 1.0200 |
| 0.1486 | 13.56 | 4000 | 0.7125 | 1.0357 |
| 0.1486 | 14.92 | 4400 | 0.7447 | 1.0347 |
| 0.122 | 16.27 | 4800 | 0.6899 | 1.0440 |
| 0.1069 | 17.63 | 5200 | 0.7212 | 1.0350 |
| 0.0961 | 18.98 | 5600 | 0.7417 | 1.0408 |
| 0.086 | 20.34 | 6000 | 0.7402 | 1.0356 |
| 0.086 | 21.69 | 6400 | 0.7761 | 1.0420 |
| 0.0756 | 23.05 | 6800 | 0.7346 | 1.0369 |
| 0.0666 | 24.41 | 7200 | 0.7506 | 1.0449 |
| 0.0595 | 25.76 | 7600 | 0.7319 | 1.0476 |
| 0.054 | 27.12 | 8000 | 0.7346 | 1.0479 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
karthik19967829/XLM-R-ar-model | 3b66431518d57148341c93d2462388af05376367 | 2022-02-03T08:14:18.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | karthik19967829 | null | karthik19967829/XLM-R-ar-model | 1 | null | transformers | 29,806 | Entry not found |
karthik19967829/XLM-R-en-model | 596e564b3bc65774835d7af80e80a2695a4b5b7e | 2022-02-03T08:22:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | karthik19967829 | null | karthik19967829/XLM-R-en-model | 1 | null | transformers | 29,807 | Entry not found |
katrin-kc/dummy-model | 8660d3151eec5d4aa6a53951caf706b583788274 | 2022-01-26T11:53:44.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | katrin-kc | null | katrin-kc/dummy-model | 1 | null | transformers | 29,808 | Entry not found |
kdo6301/DongwoongKim-test-model | 45e0b7c5254298ccf1672f79a3c2c0c85cf3ae38 | 2022-02-11T14:20:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kdo6301 | null | kdo6301/DongwoongKim-test-model | 1 | null | transformers | 29,809 | Entry not found |
kenlevine/distilbert-base-uncased-finetuned-squad | 89dc60e4764c4e3094043d54d120799e7142fe24 | 2021-11-30T18:04:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kenlevine | null | kenlevine/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 29,810 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
khady/wolof-ASR | 0122b1be76d7032209469522a3adc4f644717f1b | 2022-02-14T16:56:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | khady | null | khady/wolof-ASR | 1 | null | transformers | 29,811 | |
khursani8/distilgpt2-finetuned-wikitext2 | 4a39e4c0e07dd3b85a21e152f170ad9ec554e037 | 2021-12-28T18:10:00.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | khursani8 | null | khursani8/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 29,812 | Entry not found |
kika2000/wav2vec2-large-xls-r-300m-kika10 | 3ed906c782c2106d0b0892ee78b56774204e34b3 | 2022-01-21T00:02:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-kika10 | 1 | null | transformers | 29,813 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-georgian2-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-georgian2-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4317
- Wer: 0.4280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7071 | 4.76 | 400 | 0.6897 | 0.7844 |
| 0.2908 | 9.52 | 800 | 0.4630 | 0.5582 |
| 0.1392 | 14.29 | 1200 | 0.4501 | 0.5006 |
| 0.0977 | 19.05 | 1600 | 0.4593 | 0.4755 |
| 0.075 | 23.81 | 2000 | 0.4340 | 0.4401 |
| 0.0614 | 28.57 | 2400 | 0.4317 | 0.4280 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
kika2000/wav2vec2-large-xls-r-300m-kika4_my-colab | f65da4879cc13402ca124bb685875268ca39ea19 | 2022-01-28T01:03:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-kika4_my-colab | 1 | null | transformers | 29,814 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-kika4_my-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kika4_my-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
kika2000/wav2vec2-large-xls-r-300m-kika5_my-colab | 805ad0abd616e6217a555e8eaf56c4c7a9ba09c0 | 2022-01-29T12:28:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-kika5_my-colab | 1 | null | transformers | 29,815 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-kika5_my-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kika5_my-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3860
- Wer: 0.3505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0007 | 4.82 | 400 | 0.6696 | 0.8283 |
| 0.2774 | 9.64 | 800 | 0.4231 | 0.5476 |
| 0.1182 | 14.46 | 1200 | 0.4253 | 0.5102 |
| 0.0859 | 19.28 | 1600 | 0.4600 | 0.4866 |
| 0.0693 | 24.1 | 2000 | 0.4030 | 0.4533 |
| 0.0611 | 28.92 | 2400 | 0.4189 | 0.4412 |
| 0.0541 | 33.73 | 2800 | 0.4272 | 0.4380 |
| 0.0478 | 38.55 | 3200 | 0.4537 | 0.4505 |
| 0.0428 | 43.37 | 3600 | 0.4349 | 0.4181 |
| 0.038 | 48.19 | 4000 | 0.4562 | 0.4199 |
| 0.0345 | 53.01 | 4400 | 0.4209 | 0.4310 |
| 0.0316 | 57.83 | 4800 | 0.4336 | 0.4058 |
| 0.0288 | 62.65 | 5200 | 0.4004 | 0.3920 |
| 0.025 | 67.47 | 5600 | 0.4115 | 0.3857 |
| 0.0225 | 72.29 | 6000 | 0.4296 | 0.3948 |
| 0.0182 | 77.11 | 6400 | 0.3963 | 0.3772 |
| 0.0165 | 81.93 | 6800 | 0.3921 | 0.3687 |
| 0.0152 | 86.75 | 7200 | 0.3969 | 0.3592 |
| 0.0133 | 91.57 | 7600 | 0.3803 | 0.3527 |
| 0.0118 | 96.39 | 8000 | 0.3860 | 0.3505 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
kingabzpro/wav2vec2-large-xls-r-1b-Irish | a606fcf82b75e992321bfe44b79dc7e8fe789d77 | 2022-03-24T11:52:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-large-xls-r-1b-Irish | 1 | null | transformers | 29,816 | ---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-large-xls-r-1b-Irish-Abid
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice ga-IE
args: ga-IE
metrics:
- type: wer
value: 38.45
name: Test WER With LM
- type: cer
value: 16.52
name: Test CER With LM
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-Irish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3599
- Wer: 0.4236
- Cer: 0.1768
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-1b-Irish --dataset mozilla-foundation/common_voice_8_0 --config ga-IE --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xls-r-1b-Irish"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ga-IE", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 6.3955 | 12.48 | 100 | 2.9897 | 1.0 | 1.0 |
| 2.3811 | 24.97 | 200 | 1.2304 | 0.7140 | 0.3106 |
| 1.0476 | 37.48 | 300 | 1.0661 | 0.5597 | 0.2407 |
| 0.7014 | 49.97 | 400 | 1.1788 | 0.4799 | 0.1947 |
| 0.4409 | 62.48 | 500 | 1.2649 | 0.4658 | 0.1997 |
| 0.4839 | 74.97 | 600 | 1.3259 | 0.4450 | 0.1868 |
| 0.3643 | 87.48 | 700 | 1.3506 | 0.4312 | 0.1760 |
| 0.3468 | 99.97 | 800 | 1.3599 | 0.4236 | 0.1768 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
kingabzpro/wav2vec2-large-xls-r-300m-Tatar | 1106e4ea122cd413aa004b0b45f554f91281494f | 2022-03-24T11:58:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-large-xls-r-300m-Tatar | 1 | null | transformers | 29,817 | ---
language:
- tt
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-large-xls-r-300m-Tatar
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice tt
args: tt
metrics:
- type: wer
value: 42.71
name: Test WER With LM
- type: cer
value: 11.18
name: Test CER With LM
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Tatar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5068
- Wer: 0.4263
- Cer: 0.1117
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-300m-Tatar --dataset mozilla-foundation/common_voice_8_0 --config tt --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xls-r-300m-Tatar"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "tt", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 8.4116 | 12.19 | 500 | 3.4118 | 1.0 | 1.0 |
| 2.5829 | 24.39 | 1000 | 0.7150 | 0.6151 | 0.1582 |
| 0.4492 | 36.58 | 1500 | 0.5378 | 0.4577 | 0.1210 |
| 0.3007 | 48.77 | 2000 | 0.5068 | 0.4263 | 0.1117 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
kiyoung2/koelectra-small | 3fdd6a8e51c780b7b001740d99e690863be58c2a | 2021-12-09T19:03:35.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | kiyoung2 | null | kiyoung2/koelectra-small | 1 | null | transformers | 29,818 | Entry not found |
kizunasunhy/distilbert-base-uncased-finetuned-ner | 33ae7d9182eec9ebe6530a0586d3bc6b20d73c94 | 2021-10-15T09:16:11.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kizunasunhy | null | kizunasunhy/distilbert-base-uncased-finetuned-ner | 1 | null | transformers | 29,819 | Entry not found |
knightbat/harry-potter | 5ad3f7860ab40a2596c8af45d89e83ba7da01697 | 2021-09-18T20:41:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | knightbat | null | knightbat/harry-potter | 1 | null | transformers | 29,820 | ---
tags:
- conversational
---
# Harry Potter model |
knkarthick/TRIAL_RUN | 3a627766b1677e1703c0b558143e44f2fd17b3d8 | 2021-09-17T11:53:49.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | knkarthick | null | knkarthick/TRIAL_RUN | 1 | null | transformers | 29,821 | Entry not found |
knlu1016/albert-base-v2-finetuned-squad | 6f04ead53ee9d037db76192c83c106a920545d39 | 2021-12-10T00:08:26.000Z | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | knlu1016 | null | knlu1016/albert-base-v2-finetuned-squad | 1 | null | transformers | 29,822 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 5540 | 0.9092 |
| 0.6594 | 2.0 | 11080 | 0.9148 |
| 0.5053 | 3.0 | 16620 | 0.9641 |
| 0.3477 | 4.0 | 22160 | 1.1607 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
koala/bert-large-cased-en | 50f573214d09cb4c7ea7306aabe514ad139878f6 | 2021-11-29T20:05:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-cased-en | 1 | null | transformers | 29,823 | Entry not found |
koala/bert-large-uncased-bn | 9e9bbc00f5a4361e5b1c6c09d2f3f2c2b108cec5 | 2021-12-21T13:03:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-uncased-bn | 1 | null | transformers | 29,824 | Entry not found |
koala/bert-large-uncased-de | 67064e8fe4e42b43586514a2217eaf3ffc644a73 | 2021-11-30T07:55:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-uncased-de | 1 | null | transformers | 29,825 | Entry not found |
koala/bert-large-uncased-en | 1ca0876dcace6dc0705d4d07ef2558461ae7433c | 2021-11-29T19:08:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-uncased-en | 1 | null | transformers | 29,826 | Entry not found |
koala/bert-large-uncased-fa | d3f97a2d531d692e3223898856bfa8d35a60b7f0 | 2021-12-17T07:44:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-uncased-fa | 1 | null | transformers | 29,827 | Entry not found |
koala/bert-large-uncased-hi | bbe128e8575539bcb270a91acec9d1687f09d738 | 2021-12-17T07:52:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-uncased-hi | 1 | null | transformers | 29,828 | Entry not found |
koala/bert-large-uncased-zh | dcf8efbde5f00ab076fbc4b73799e583999476a8 | 2021-12-10T08:47:42.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-large-uncased-zh | 1 | null | transformers | 29,829 | Entry not found |
koala/xlm-roberta-large-fa | 58643f6ace95a6056c19a28acee89ab04deb0617 | 2021-12-21T13:08:02.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/xlm-roberta-large-fa | 1 | null | transformers | 29,830 | Entry not found |
kobkrit/wangchanberta-ner-2 | fd7575859bbea7bb187fa6fa61509dd7ea2d3019 | 2022-02-15T03:46:11.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kobkrit | null | kobkrit/wangchanberta-ner-2 | 1 | null | transformers | 29,831 | Entry not found |
kornesh/roberta-large-wechsel-tamil | 3e5fac40ca5bdeefb9c9a1583191e16cbab7b3d9 | 2021-11-14T04:40:24.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kornesh | null | kornesh/roberta-large-wechsel-tamil | 1 | null | transformers | 29,832 | Entry not found |
kornwtp/sup-consert-large | 0b3976d1e662917c970d3bf6102ee7c8f025158f | 2021-12-25T05:46:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | kornwtp | null | kornwtp/sup-consert-large | 1 | null | transformers | 29,833 | Entry not found |
kornwtp/unsup-consert-large | 9cd453330c5dccf272066c8e7bb4f46cfe5491d1 | 2021-12-25T05:40:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | kornwtp | null | kornwtp/unsup-consert-large | 1 | null | transformers | 29,834 | Entry not found |
kris/DialoGPT-small-spock3 | 1cede3bb4cae40548995ec6c362c489d606a65c4 | 2021-09-18T17:33:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kris | null | kris/DialoGPT-small-spock3 | 1 | null | transformers | 29,835 | ---
tags:
- conversational
---
# Spock model |
ksmcg/push_hub_test | bc529d06fc8c97dc5050eb3e30e08a891b5e07ef | 2021-08-23T12:56:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ksmcg | null | ksmcg/push_hub_test | 1 | null | transformers | 29,836 | Entry not found |
kwang1993/wav2vec2-base-timit-demo | 0f87226db754a22e3a98ad153484c5a58cd0e60f | 2021-12-21T04:54:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | kwang1993 | null | kwang1993/wav2vec2-base-timit-demo | 1 | null | transformers | 29,837 | https://huggingface.co/blog/fine-tune-wav2vec2-english
Use the processor from https://huggingface.co/facebook/wav2vec2-base |
kwang2049/TSDAE-cqadupstack2nli_stsb | 7caae3852b6916417df296271bf5145a4c56eebb | 2021-10-25T16:14:19.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-cqadupstack2nli_stsb | 1 | null | transformers | 29,838 | # kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
l41n/c3rbs | 24b89edfb82103c67838d61b8919b32ce0e2cd14 | 2021-08-24T02:40:57.000Z | [
"pytorch",
"conversational"
] | conversational | false | l41n | null | l41n/c3rbs | 1 | null | null | 29,839 | ---
tags:
- conversational
---
# <3 |
lagodw/plotly_gpt | daa1a13bc6416269e62245882358bf42230f973d | 2021-10-03T21:52:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/plotly_gpt | 1 | null | transformers | 29,840 | Entry not found |
lagodw/redditbot | 72c28ef74178fb13aaa91e625136d65406253ca8 | 2021-08-20T05:14:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/redditbot | 1 | null | transformers | 29,841 | Entry not found |
lagodw/redditbot_gpt2 | 294a88532e56b2d82b0cb1cb86abad6aab97a7cf | 2021-09-10T02:01:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/redditbot_gpt2 | 1 | null | transformers | 29,842 | Entry not found |
lagodw/redditbot_gpt2_v2 | d56603157b5621b697b9379a7ccc90e4551e95a5 | 2021-09-19T07:34:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/redditbot_gpt2_v2 | 1 | null | transformers | 29,843 | Entry not found |
leeeki/roberta-large_Explainable | 7979ed64d811bd9f5a1223628077bfda6eb13c0b | 2022-02-19T13:16:29.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | leeeki | null | leeeki/roberta-large_Explainable | 1 | null | transformers | 29,844 | Entry not found |
leolin12345/ft-lr-cu | 8f539b20b51ab692aa733332546714cf4dbfa1fe | 2022-02-24T22:29:14.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | leolin12345 | null | leolin12345/ft-lr-cu | 1 | null | transformers | 29,845 | |
lewtun/distilbert-base-uncased-finetuned-imdb-accelerate | fa4ee8880f431cf14bf0b8b5b4d5b9297d009841 | 2021-10-04T21:03:16.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lewtun | null | lewtun/distilbert-base-uncased-finetuned-imdb-accelerate | 1 | null | transformers | 29,846 | Entry not found |
lewtun/distilbert-base-uncased-finetuned-squad-d5716d28 | d52f97cfce2a6c9bd0c43e8656263f4b7f278513 | 2021-09-30T18:36:45.000Z | [
"pytorch",
"en",
"dataset:squad",
"arxiv:1910.01108",
"question-answering",
"license:apache-2.0"
] | question-answering | false | lewtun | null | lewtun/distilbert-base-uncased-finetuned-squad-d5716d28 | 1 | 1 | null | 29,847 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
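As a minimal usage sketch (the card itself does not include inference code; this assumes the standard `transformers` question-answering pipeline):
```python
from transformers import pipeline

# Load this checkpoint in an extractive question-answering pipeline.
qa = pipeline(
    "question-answering",
    model="lewtun/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="Which model acts as the teacher?",
    context="A DistilBERT student is fine-tuned on SQuAD v1.1 with a BERT model, "
            "also fine-tuned on SQuAD v1.1, acting as the teacher.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```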
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
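For reference, a minimal sketch of how such scores are computed with that metric (the id, texts, and answer offset below are placeholders, not taken from the actual evaluation):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "0", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "0",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]
# Returns exact-match and F1 percentages, e.g. {'exact_match': 100.0, 'f1': 100.0}
print(squad_metric.compute(predictions=predictions, references=references))
```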
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lewtun/distilbert-base-uncased-finetuned-squad-v1 | 7642ba53217922c326598ce47ae3360fd8ef27ee | 2021-01-31T11:55:20.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | lewtun | null | lewtun/distilbert-base-uncased-finetuned-squad-v1 | 1 | null | transformers | 29,848 | Entry not found |
lewtun/dummy-model | b4ef4abc8cbb57f16007704cd97ea436d1914153 | 2021-07-07T08:37:39.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lewtun | null | lewtun/dummy-model | 1 | null | transformers | 29,849 | Entry not found |
lewtun/metnet-test-4 | 03a2e3eeb1630277bae5c625d7945272928a80bd | 2021-09-06T11:00:39.000Z | [
"pytorch",
"transformers",
"satflow",
"license:mit"
] | null | false | lewtun | null | lewtun/metnet-test-4 | 1 | null | transformers | 29,850 | ---
license: mit
tags:
- satflow
---
# Model Card for MetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
lewtun/perceriver-test-01 | f421ebcea62d9c1296d2e144e8b255ee72b684f3 | 2021-09-14T14:07:26.000Z | [
"pytorch",
"transformers",
"satflow",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit"
] | null | false | lewtun | null | lewtun/perceriver-test-01 | 1 | null | transformers | 29,851 | ---
license: mit
tags:
- satflow
- forecasting
- timeseries
- remote-sensing
---
# Perceiver
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
lg/fexp_3 | 551b8017dab44820b84e05acdac4a74bde80af15 | 2021-05-01T06:03:40.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/fexp_3 | 1 | null | transformers | 29,852 | Entry not found |
lg/fexp_4 | df7dc4afbc29060e86e5b0b3933c1ead1f0a6244 | 2021-05-01T17:25:46.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/fexp_4 | 1 | null | transformers | 29,853 | Entry not found |
lg/fexp_5 | 3f555517b2dca438c05d80505e6a511abe13b106 | 2021-05-01T23:26:00.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/fexp_5 | 1 | null | transformers | 29,854 | Entry not found |
lg/ghpy_20k | e39db64349d4cebf48ec8fd142e39e71ec1ce2e8 | 2021-07-20T23:55:56.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/ghpy_20k | 1 | 2 | transformers | 29,855 | **This model is provided with no guarantees whatsoever; use at your own risk.**
This is a GPT-Neo 2.7B model fine-tuned for 20k steps on GitHub data scraped by an EleutherAI member (filtered to Python only). A better code model is coming soon™ (hopefully, maybe); this model was created mostly as a test of infrastructure code.
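A minimal generation sketch (not part of the original note; it assumes the repository ships tokenizer files — if not, the `EleutherAI/gpt-neo-2.7B` tokenizer is the likely fallback):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lg/ghpy_20k"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumption: tokenizer files present
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
``` |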
lg/ghpy_4k | 54f17c5a7f0a0bccd088ac66ab5a60fab095306d | 2021-05-14T22:15:12.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/ghpy_4k | 1 | null | transformers | 29,856 | Entry not found |
lg/ghpy_8k | c768029bfb46696e5e34b664ddfc68d734ef781a | 2021-05-15T15:58:11.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lg | null | lg/ghpy_8k | 1 | null | transformers | 29,857 | Entry not found |
lgris/WavLM-large-CORAA-pt | b93637676546c6f75ed2e9b37d16dafbfc3493cb | 2022-02-10T23:21:45.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"pt",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/WavLM-large-CORAA-pt | 1 | null | transformers | 29,858 | ---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- pt
model-index:
- name: WavLM-large-CORAA-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WavLM-large-CORAA-pt
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on [CORAA dataset](https://github.com/nilc-nlp/CORAA).
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Wer: 0.3840
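A minimal inference sketch (not included in the original card; it assumes the repository ships processor files compatible with `AutoProcessor`, and `audio.wav` is a placeholder path for a recording):
```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "lgris/WavLM-large-CORAA-pt"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Load an audio file and resample it to the 16 kHz rate expected by the model.
speech, sr = torchaudio.load("audio.wav")  # placeholder path
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0).numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```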
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.04 | 1000 | 1.9230 | 0.9960 |
| 5.153 | 0.08 | 2000 | 1.3733 | 0.8444 |
| 5.153 | 0.13 | 3000 | 1.1992 | 0.7362 |
| 1.367 | 0.17 | 4000 | 1.1289 | 0.6957 |
| 1.367 | 0.21 | 5000 | 1.0357 | 0.6470 |
| 1.1824 | 0.25 | 6000 | 1.0216 | 0.6201 |
| 1.1824 | 0.29 | 7000 | 0.9338 | 0.6036 |
| 1.097 | 0.33 | 8000 | 0.9149 | 0.5760 |
| 1.097 | 0.38 | 9000 | 0.8885 | 0.5541 |
| 1.0254 | 0.42 | 10000 | 0.8678 | 0.5366 |
| 1.0254 | 0.46 | 11000 | 0.8349 | 0.5323 |
| 0.9782 | 0.5 | 12000 | 0.8230 | 0.5155 |
| 0.9782 | 0.54 | 13000 | 0.8245 | 0.5049 |
| 0.9448 | 0.59 | 14000 | 0.7802 | 0.4990 |
| 0.9448 | 0.63 | 15000 | 0.7650 | 0.4900 |
| 0.9092 | 0.67 | 16000 | 0.7665 | 0.4796 |
| 0.9092 | 0.71 | 17000 | 0.7568 | 0.4795 |
| 0.8764 | 0.75 | 18000 | 0.7403 | 0.4615 |
| 0.8764 | 0.8 | 19000 | 0.7219 | 0.4644 |
| 0.8498 | 0.84 | 20000 | 0.7180 | 0.4502 |
| 0.8498 | 0.88 | 21000 | 0.7017 | 0.4436 |
| 0.8278 | 0.92 | 22000 | 0.6992 | 0.4395 |
| 0.8278 | 0.96 | 23000 | 0.7021 | 0.4329 |
| 0.8077 | 1.0 | 24000 | 0.6892 | 0.4265 |
| 0.8077 | 1.05 | 25000 | 0.6940 | 0.4248 |
| 0.7486 | 1.09 | 26000 | 0.6767 | 0.4202 |
| 0.7486 | 1.13 | 27000 | 0.6734 | 0.4150 |
| 0.7459 | 1.17 | 28000 | 0.6650 | 0.4152 |
| 0.7459 | 1.21 | 29000 | 0.6559 | 0.4078 |
| 0.7304 | 1.26 | 30000 | 0.6536 | 0.4088 |
| 0.7304 | 1.3 | 31000 | 0.6537 | 0.4025 |
| 0.7183 | 1.34 | 32000 | 0.6462 | 0.4008 |
| 0.7183 | 1.38 | 33000 | 0.6381 | 0.3973 |
| 0.7059 | 1.42 | 34000 | 0.6266 | 0.3930 |
| 0.7059 | 1.46 | 35000 | 0.6280 | 0.3921 |
| 0.6983 | 1.51 | 36000 | 0.6248 | 0.3897 |
| 0.6983 | 1.55 | 37000 | 0.6275 | 0.3872 |
| 0.6892 | 1.59 | 38000 | 0.6199 | 0.3852 |
| 0.6892 | 1.63 | 39000 | 0.6180 | 0.3842 |
| 0.691 | 1.67 | 40000 | 0.6144 | 0.3840 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
lgris/bp-commonvoice100-xlsr | 5f31471894f0f6a5b8d2187e0bb3c919841e65bd | 2021-11-27T21:04:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/bp-commonvoice100-xlsr | 1 | null | transformers | 29,859 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# commonvoice100-xlsr: Wav2vec 2.0 with Common Voice Dataset
This is a demonstration of a Wav2vec 2.0 model fine-tuned for Brazilian Portuguese using the [Common Voice 7.0](https://commonvoice.mozilla.org/pt) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | 37.8h | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| commonvoice\_100 (demonstration below) |0.088 | 0.126 | 0.121 | 0.173 | 0.177 | 0.424 | 0.145 | 0.179 |
| commonvoice\_100 + 4-gram (demonstration below) |0.057 | 0.095 | 0.076 | 0.138 | 0.146 | 0.382 | 0.130 | 0.146|
## Demonstration
```python
MODEL_NAME = "lgris/commonvoice100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.08868880057404624
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.12601035333655114
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.12149621212121209
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.173594387890256
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1775290775992294
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.4245704568241374
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.14541801948051947
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05764220069547976
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09569130510737103
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07688131313131312
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.13814768877494732
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.14652459944499036
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.38196090002435623
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.13054112554112554
|
lgris/bp-mls100-xlsr | c0ab58085e26fa3802b1d259f64e2b51cca933ad | 2022-01-02T23:54:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/bp-mls100-xlsr | 1 | null | transformers | 29,860 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# mls100-xlsr: Wav2vec 2.0 with MLS Dataset
This is a demonstration of a Wav2vec 2.0 model fine-tuned for Brazilian Portuguese using the [Multilingual Librispeech in Portuguese (MLS)](http://www.openslr.org/94/) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | 161h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | 161h | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| mls100 (demonstration below) | 0.192 | 0.260 | 0.162 | 0.163 | 0.268 | 0.492 | 0.268 | 0.258 |
| mls100 + 4-gram (demonstration below) | 0.087 | 0.173 | 0.077 | 0.126 | 0.245 | 0.415 | 0.218 | 0.191 |
## Demonstration
```python
MODEL_NAME = "lgris/mls100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset/
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.192586382955233
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.2604333640312866
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.16259469696969692
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.16343014413283674
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.2682880375992515
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.49252836581485837
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.2686972402597403
### Tests with LM
```python
!rm -rf ~/.cache
%cd /content/
# !gdown --id '1d13Onxy9ubmJZORZ8FO2vnsnl36QMiUc' # trained with wikipedia;
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
%cd bp_dataset/
```
/content/bp_dataset
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.0878818926974661
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.173303354010221
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07691919191919189
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.12624377042839321
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.24545473435776916
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.4156272215612955
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.21832386363636366
|
lgris/bp-tedx100-xlsr | 5a584f0a42628cf030bd6f1c802e9ac73a6ff468 | 2021-11-27T21:12:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/bp-tedx100-xlsr | 1 | null | transformers | 29,861 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# tedx100-xlsr: Wav2vec 2.0 with TEDx Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [TEDx multilingual in Portuguese](http://www.openslr.org/100) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 148.8h| -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total |148.8h | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| tedx\_100 (demonstration below) |0.138 | 0.369 | 0.169 | 0.165 | 0.794 | 0.222 | 0.395 | 0.321|
| tedx\_100 + 4-gram (demonstration below) |0.123 | 0.414 | 0.171 | 0.152 | 0.982 | 0.215 | 0.395 | 0.350|
## Demonstration
```python
MODEL_NAME = "lgris/tedx100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.13846663354859937
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.36960721735520236
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.16941287878787875
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.16586103382107384
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.7943364822145216
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.22221476803982182
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.39486066017315996
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.12338749517028079
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.4146185693398481
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.17142676767676762
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.15212081808962674
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.982518441309493
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.21567860841157235
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.3952218614718614
|
lgris/sew-tiny-pt | c58850627b1f7b2ed8d9a7f487c40449ec0a7dde | 2021-12-30T17:37:50.000Z | [
"pytorch",
"sew",
"feature-extraction",
"pt",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | lgris | null | lgris/sew-tiny-pt | 1 | 1 | transformers | 29,862 | ---
language: pt
tags:
- speech
license: apache-2.0
---
# SEW-tiny-pt
This is a version of [SEW tiny by ASAPP Research](https://github.com/asappresearch/sew) pretrained on Brazilian Portuguese audio.
The base model was pretrained on 16 kHz sampled speech audio; when using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
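A minimal sketch of that substitution is shown below; the vocabulary size is a placeholder and must match the tokenizer built for your target corpus:
```python
from transformers import SEWForCTC

# Load the pretrained SEW-tiny-pt backbone with a freshly initialized CTC head.
# vocab_size (and pad_token_id, not shown) must match your target-corpus tokenizer;
# the value below is only a placeholder.
model = SEWForCTC.from_pretrained(
    "lgris/sew-tiny-pt",
    ctc_loss_reduction="mean",
    vocab_size=32,
)
```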
|
lgris/wav2vec2-xls-r-1b-portuguese-CORAA-3 | d5c6d89fcd434f1b1a3475dbcd5f507c33f2ab57 | 2022-03-24T11:55:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-xls-r-1b-portuguese-CORAA-3 | 1 | null | transformers | 29,863 | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- pt
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: wav2vec2-xls-r-1b-portuguese-CORAA-3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pt
metrics:
- name: Test WER
type: wer
value: 71.67
- name: Test CER
type: cer
value: 30.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 68.18
- name: Test CER
type: cer
value: 28.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 56.76
- name: Test CER
type: cer
value: 23.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-portuguese-CORAA-3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on [CORAA dataset](https://github.com/nilc-nlp/CORAA).
It achieves the following results on the evaluation set:
- Loss: 1.0029
- Wer: 0.6020
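For quick transcription with this checkpoint, a minimal sketch using the ASR pipeline is shown below; the audio path is a placeholder and the input should be 16 kHz mono Portuguese speech:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lgris/wav2vec2-xls-r-1b-portuguese-CORAA-3",
)
# "audio.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("audio.wav")["text"])
```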
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- training_steps: 30000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.0169 | 0.21 | 5000 | 1.9582 | 0.9283 |
| 1.8561 | 0.42 | 10000 | 1.6144 | 0.8554 |
| 1.6823 | 0.63 | 15000 | 1.4165 | 0.7710 |
| 1.52 | 0.84 | 20000 | 1.2441 | 0.7289 |
| 1.3757 | 1.05 | 25000 | 1.1061 | 0.6491 |
| 1.2377 | 1.26 | 30000 | 1.0029 | 0.6020 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
lgris/wav2vec2-xls-r-gn-cv7 | 47c960412a1b1e1a572012712441a0e1406923b3 | 2022-03-24T11:58:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gn",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-xls-r-gn-cv7 | 1 | null | transformers | 29,864 | ---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-gn-cv7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Validation WER
type: wer
value: 73.02
- name: Validation CER
type: cer
value: 17.79
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: gn
metrics:
- name: Test WER
type: wer
value: 62.65
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-gn-cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7197
- Wer: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 13000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.4669 | 6.24 | 100 | 3.3003 | 1.0 |
| 3.3214 | 12.48 | 200 | 3.2090 | 1.0 |
| 3.1619 | 18.73 | 300 | 2.6322 | 1.0 |
| 1.751 | 24.97 | 400 | 1.4089 | 0.9803 |
| 0.7997 | 31.24 | 500 | 0.9996 | 0.9211 |
| 0.4996 | 37.48 | 600 | 0.9879 | 0.8553 |
| 0.3677 | 43.73 | 700 | 0.9543 | 0.8289 |
| 0.2851 | 49.97 | 800 | 1.0627 | 0.8487 |
| 0.2556 | 56.24 | 900 | 1.0933 | 0.8355 |
| 0.2268 | 62.48 | 1000 | 0.9191 | 0.8026 |
| 0.1914 | 68.73 | 1100 | 0.9582 | 0.7961 |
| 0.1749 | 74.97 | 1200 | 1.0502 | 0.8092 |
| 0.157 | 81.24 | 1300 | 0.9998 | 0.7632 |
| 0.1505 | 87.48 | 1400 | 1.0076 | 0.7303 |
| 0.1278 | 93.73 | 1500 | 0.9321 | 0.75 |
| 0.1078 | 99.97 | 1600 | 1.0383 | 0.7697 |
| 0.1156 | 106.24 | 1700 | 1.0302 | 0.7763 |
| 0.1107 | 112.48 | 1800 | 1.0419 | 0.7763 |
| 0.091 | 118.73 | 1900 | 1.0694 | 0.75 |
| 0.0829 | 124.97 | 2000 | 1.0257 | 0.7829 |
| 0.0865 | 131.24 | 2100 | 1.2108 | 0.7368 |
| 0.0907 | 137.48 | 2200 | 1.0458 | 0.7697 |
| 0.0897 | 143.73 | 2300 | 1.1504 | 0.7895 |
| 0.0766 | 149.97 | 2400 | 1.1663 | 0.7237 |
| 0.0659 | 156.24 | 2500 | 1.1320 | 0.7632 |
| 0.0699 | 162.48 | 2600 | 1.2586 | 0.7434 |
| 0.0613 | 168.73 | 2700 | 1.1815 | 0.8158 |
| 0.0598 | 174.97 | 2800 | 1.3299 | 0.75 |
| 0.0577 | 181.24 | 2900 | 1.2035 | 0.7171 |
| 0.0576 | 187.48 | 3000 | 1.2134 | 0.7434 |
| 0.0518 | 193.73 | 3100 | 1.3406 | 0.7566 |
| 0.0524 | 199.97 | 3200 | 1.4251 | 0.75 |
| 0.0467 | 206.24 | 3300 | 1.3533 | 0.7697 |
| 0.0428 | 212.48 | 3400 | 1.2463 | 0.7368 |
| 0.0453 | 218.73 | 3500 | 1.4532 | 0.7566 |
| 0.0473 | 224.97 | 3600 | 1.3152 | 0.7434 |
| 0.0451 | 231.24 | 3700 | 1.2232 | 0.7368 |
| 0.0361 | 237.48 | 3800 | 1.2938 | 0.7171 |
| 0.045 | 243.73 | 3900 | 1.4148 | 0.7434 |
| 0.0422 | 249.97 | 4000 | 1.3786 | 0.7961 |
| 0.036 | 256.24 | 4100 | 1.4488 | 0.7697 |
| 0.0352 | 262.48 | 4200 | 1.2294 | 0.6776 |
| 0.0326 | 268.73 | 4300 | 1.2796 | 0.6974 |
| 0.034 | 274.97 | 4400 | 1.3805 | 0.7303 |
| 0.0305 | 281.24 | 4500 | 1.4994 | 0.7237 |
| 0.0325 | 287.48 | 4600 | 1.4330 | 0.6908 |
| 0.0338 | 293.73 | 4700 | 1.3091 | 0.7368 |
| 0.0306 | 299.97 | 4800 | 1.2174 | 0.7171 |
| 0.0299 | 306.24 | 4900 | 1.3527 | 0.7763 |
| 0.0287 | 312.48 | 5000 | 1.3651 | 0.7368 |
| 0.0274 | 318.73 | 5100 | 1.4337 | 0.7368 |
| 0.0258 | 324.97 | 5200 | 1.3831 | 0.6908 |
| 0.022 | 331.24 | 5300 | 1.3556 | 0.6974 |
| 0.021 | 337.48 | 5400 | 1.3836 | 0.7237 |
| 0.0241 | 343.73 | 5500 | 1.4352 | 0.7039 |
| 0.0229 | 349.97 | 5600 | 1.3904 | 0.7105 |
| 0.026 | 356.24 | 5700 | 1.4131 | 0.7171 |
| 0.021 | 362.48 | 5800 | 1.5426 | 0.6974 |
| 0.0191 | 368.73 | 5900 | 1.5960 | 0.7632 |
| 0.0227 | 374.97 | 6000 | 1.6240 | 0.7368 |
| 0.0204 | 381.24 | 6100 | 1.4301 | 0.7105 |
| 0.0175 | 387.48 | 6200 | 1.5554 | 0.75 |
| 0.0183 | 393.73 | 6300 | 1.6044 | 0.7697 |
| 0.0183 | 399.97 | 6400 | 1.5963 | 0.7368 |
| 0.016 | 406.24 | 6500 | 1.5679 | 0.7829 |
| 0.0178 | 412.48 | 6600 | 1.5928 | 0.7697 |
| 0.014 | 418.73 | 6700 | 1.7000 | 0.7632 |
| 0.0182 | 424.97 | 6800 | 1.5340 | 0.75 |
| 0.0148 | 431.24 | 6900 | 1.9274 | 0.7368 |
| 0.0148 | 437.48 | 7000 | 1.6437 | 0.7697 |
| 0.0173 | 443.73 | 7100 | 1.5468 | 0.75 |
| 0.0109 | 449.97 | 7200 | 1.6083 | 0.75 |
| 0.0167 | 456.24 | 7300 | 1.6732 | 0.75 |
| 0.0139 | 462.48 | 7400 | 1.5097 | 0.7237 |
| 0.013 | 468.73 | 7500 | 1.5947 | 0.7171 |
| 0.0128 | 474.97 | 7600 | 1.6260 | 0.7105 |
| 0.0166 | 481.24 | 7700 | 1.5756 | 0.7237 |
| 0.0127 | 487.48 | 7800 | 1.4506 | 0.6908 |
| 0.013 | 493.73 | 7900 | 1.4882 | 0.7368 |
| 0.0125 | 499.97 | 8000 | 1.5589 | 0.7829 |
| 0.0141 | 506.24 | 8100 | 1.6328 | 0.7434 |
| 0.0115 | 512.48 | 8200 | 1.6586 | 0.7434 |
| 0.0117 | 518.73 | 8300 | 1.6043 | 0.7105 |
| 0.009 | 524.97 | 8400 | 1.6508 | 0.7237 |
| 0.0108 | 531.24 | 8500 | 1.4507 | 0.6974 |
| 0.011 | 537.48 | 8600 | 1.5942 | 0.7434 |
| 0.009 | 543.73 | 8700 | 1.8121 | 0.7697 |
| 0.0112 | 549.97 | 8800 | 1.6923 | 0.7697 |
| 0.0073 | 556.24 | 8900 | 1.7096 | 0.7368 |
| 0.0098 | 562.48 | 9000 | 1.7052 | 0.7829 |
| 0.0088 | 568.73 | 9100 | 1.6956 | 0.7566 |
| 0.0099 | 574.97 | 9200 | 1.4909 | 0.7171 |
| 0.0075 | 581.24 | 9300 | 1.6307 | 0.7697 |
| 0.0077 | 587.48 | 9400 | 1.6196 | 0.7961 |
| 0.0088 | 593.73 | 9500 | 1.6119 | 0.7566 |
| 0.0085 | 599.97 | 9600 | 1.4512 | 0.7368 |
| 0.0086 | 606.24 | 9700 | 1.5992 | 0.7237 |
| 0.0109 | 612.48 | 9800 | 1.4706 | 0.7368 |
| 0.0098 | 618.73 | 9900 | 1.3824 | 0.7171 |
| 0.0091 | 624.97 | 10000 | 1.4776 | 0.6974 |
| 0.0072 | 631.24 | 10100 | 1.4896 | 0.7039 |
| 0.0087 | 637.48 | 10200 | 1.5467 | 0.7368 |
| 0.007 | 643.73 | 10300 | 1.5493 | 0.75 |
| 0.0076 | 649.97 | 10400 | 1.5706 | 0.7303 |
| 0.0085 | 656.24 | 10500 | 1.5748 | 0.7237 |
| 0.0075 | 662.48 | 10600 | 1.5081 | 0.7105 |
| 0.0068 | 668.73 | 10700 | 1.4967 | 0.6842 |
| 0.0117 | 674.97 | 10800 | 1.4986 | 0.7105 |
| 0.0054 | 681.24 | 10900 | 1.5587 | 0.7303 |
| 0.0059 | 687.48 | 11000 | 1.5886 | 0.7171 |
| 0.0071 | 693.73 | 11100 | 1.5746 | 0.7171 |
| 0.0048 | 699.97 | 11200 | 1.6166 | 0.7237 |
| 0.0048 | 706.24 | 11300 | 1.6098 | 0.7237 |
| 0.0056 | 712.48 | 11400 | 1.5834 | 0.7237 |
| 0.0048 | 718.73 | 11500 | 1.5653 | 0.7171 |
| 0.0045 | 724.97 | 11600 | 1.6252 | 0.7237 |
| 0.0068 | 731.24 | 11700 | 1.6794 | 0.7171 |
| 0.0044 | 737.48 | 11800 | 1.6881 | 0.7039 |
| 0.008 | 743.73 | 11900 | 1.7393 | 0.75 |
| 0.0045 | 749.97 | 12000 | 1.6869 | 0.7237 |
| 0.0047 | 756.24 | 12100 | 1.7105 | 0.7303 |
| 0.0057 | 762.48 | 12200 | 1.7439 | 0.7303 |
| 0.004 | 768.73 | 12300 | 1.7871 | 0.7434 |
| 0.0061 | 774.97 | 12400 | 1.7812 | 0.7303 |
| 0.005 | 781.24 | 12500 | 1.7410 | 0.7434 |
| 0.0056 | 787.48 | 12600 | 1.7220 | 0.7303 |
| 0.0064 | 793.73 | 12700 | 1.7141 | 0.7434 |
| 0.0042 | 799.97 | 12800 | 1.7139 | 0.7368 |
| 0.0049 | 806.24 | 12900 | 1.7211 | 0.7434 |
| 0.0044 | 812.48 | 13000 | 1.7197 | 0.7434 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
li666/wav2vec2-large-xls-r-300m-zh-CN-colab | c0bf86ae29e64101f1add0b77cb3589fa5876d03 | 2021-12-13T11:30:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | li666 | null | li666/wav2vec2-large-xls-r-300m-zh-CN-colab | 1 | null | transformers | 29,865 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-zh-CN-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-zh-CN-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
liaad/ud_srl-pt_xlmr-large | 123fb87b8991706fc99d0b689d372ebbadcee246 | 2021-09-22T08:56:46.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"pt",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"dataset:Universal Dependencies",
"arxiv:2101.01213",
"transformers",
"xlm-roberta-large",
"semantic role labeling",
"finetuned",
"dependency parsing",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/ud_srl-pt_xlmr-large | 1 | null | transformers | 29,866 | ---
language:
- multilingual
- pt
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
- dependency parsing
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
- Universal Dependencies
metrics:
- F1 Measure
---
# XLM-R large fine-tune in Portuguese Universal Dependencies and semantic role labeling
## Model description
This model is [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned first on the Universal Dependencies Portuguese dataset and then on the PropBank.Br data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/ud_srl-pt_xlmr-large")
model = AutoModel.from_pretrained("liaad/ud_srl-pt_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
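Continuing from the loading snippet above, a quick sanity check of the transformers portion alone is shown below (this does not perform SRL; the example sentence is arbitrary):
```python
import torch

# Extract contextual embeddings for a Portuguese sentence with the fine-tuned encoder.
inputs = tokenizer("O menino comeu a maçã.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 1024)
```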
#### Limitations and bias
- This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
- The model was trained for only 10 epochs on the Universal Dependencies dataset.
## Training procedure
The model was trained on the Universal Dependencies Portuguese dataset; then on the CoNLL formatted OntoNotes v5.0; then on Portuguese semantic role labeling data (PropBank.Br) using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
life4free96/DialogGPT-med-TeiaMoranta3 | f9bd49b0a9b0289bdb50f454b02767bffa5d3808 | 2021-11-14T20:06:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | life4free96 | null | life4free96/DialogGPT-med-TeiaMoranta3 | 1 | null | transformers | 29,867 | ---
tags:
- conversational
---
|
ligolab/DxRoberta | e5d6cbc25a0a0e8cc90f8ae2c25e07d7069d2dd9 | 2021-06-24T13:47:03.000Z | [
"pytorch",
"roberta",
"fill-mask",
"sentence-transformers",
"feature-extraction"
] | feature-extraction | false | ligolab | null | ligolab/DxRoberta | 1 | null | sentence-transformers | 29,868 | ---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
---
## Testing Sentence Transformer
This RoBERTa model is trained from scratch with a Masked Language Modelling task on a collection of medical reports |
limter/DialoGPT-medium-krish | da8e2bbdff277d299bcab3bafff59833d9238bd0 | 2021-06-10T04:28:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | limter | null | limter/DialoGPT-medium-krish | 1 | null | transformers | 29,869 | Entry not found |
lonePatient/albert_chinese_small | 9fae890b5b646258f046e9b86e40b4b79c300916 | 2020-04-24T16:02:11.000Z | [
"pytorch",
"albert",
"transformers"
] | null | false | lonePatient | null | lonePatient/albert_chinese_small | 1 | null | transformers | 29,870 | Entry not found |
lonewanderer27/YuriBot | aa359ff8c960ad64f4b56ceeb4544b2a5db40df6 | 2022-02-08T12:30:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lonewanderer27 | null | lonewanderer27/YuriBot | 1 | null | transformers | 29,871 | ---
tags:
- conversational
---
# Camp Buddy - Yuri - DialoGPTMedium Model |
longcld/t5_small_checkpoint | 41f9e4e2dadbb51ba924bcd9f705c3258fdf07d7 | 2021-07-14T21:49:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | longcld | null | longcld/t5_small_checkpoint | 1 | null | transformers | 29,872 | Entry not found |
longcld/t5_small_squad_trans_old | 0b6e5f5c0a55c6629cfe1bd14564ca4b7e6c5b85 | 2021-07-25T14:15:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | longcld | null | longcld/t5_small_squad_trans_old | 1 | null | transformers | 29,873 | Entry not found |
loodos/albert-base-turkish-uncased | 3275004703c3ea35b5dcde5b684b707d32e5a69e | 2020-12-11T21:49:21.000Z | [
"pytorch",
"tf",
"albert",
"tr",
"transformers"
] | null | false | loodos | null | loodos/albert-base-turkish-uncased | 1 | null | transformers | 29,874 | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As the R&D team at Loodos, we release cased and uncased versions of the most recent language models for Turkish. More details about the pretrained models and their evaluation on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish ALBERT-Base (uncased)
This is an ALBERT-Base model with 12 repeated encoder layers and a hidden size of 768, trained on an uncased Turkish dataset.
## Usage
Using AutoModel and AutoTokenizer from Transformers, you can import the model as described below.
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("loodos/albert-base-turkish-uncased", do_lower_case=False, keep_accents=True)
model = AutoModel.from_pretrained("loodos/albert-base-turkish-uncased")
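# TextNormalization is not part of transformers; it comes from the Loodos
# turkish-language-models repo linked below (see "Notes on Tokenizers").
text = "Bu bir örnek cümledir."  # placeholder input text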
normalizer = TextNormalization()
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)
tokenizer.tokenize(normalized_text)
```
### Notes on Tokenizers
Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning the letters "ı, i, I, İ" and other non-ASCII Turkish-specific letters. There are two reasons.
1- The vocabulary and SentencePiece model are created with NFC/NFKC normalization, but the tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information: some tokens are never trained (like "şanlıurfa", "öğün", "çocuk", etc.). NFD/NFKD normalization is not suitable for Turkish.
2- Python's default ```string.lower()``` and ```string.upper()``` make the conversions
- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'
respectively. However, in Turkish, 'I' and 'İ' are two different letters.
We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).
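The snippet below illustrates the casing pitfall with a minimal Turkish-aware lowercasing helper; it is only an illustration of the problem, not the `TextNormalization` module itself:
```python
print("ISPARTA".lower())  # 'isparta' -- dotless 'I' wrongly becomes dotted 'i'
print(len("İ".lower()))   # 2 -- 'İ' lowercases to 'i' plus a combining dot above

def turkish_lower(text: str) -> str:
    # Handle the two Turkish 'I' letters explicitly before the generic lower().
    return text.replace("İ", "i").replace("I", "ı").lower()

print(turkish_lower("ISPARTA"))   # 'ısparta'
print(turkish_lower("İSTANBUL"))  # 'istanbul'
```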
## Details and Contact
You contact us to ask a question, open an issue or give feedback via our github [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to TFRC Team for providing us cloud TPUs on Tensorflow Research Cloud to train our models.
|
ltrctelugu/roberta_ltrc_telugu | 72c75264ab3d2a07a4a95cf50023508e3e30165c | 2021-05-20T17:39:55.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ltrctelugu | null | ltrctelugu/roberta_ltrc_telugu | 1 | null | transformers | 29,875 | Entry not found |
lucio/xls-r-uyghur-cv7 | c6145a275ebf4b96ab19743a3e8126dd4d9c2187 | 2022-03-24T11:58:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ug",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lucio | null | lucio/xls-r-uyghur-cv7 | 1 | 1 | transformers | 29,876 | ---
language:
- ug
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- ug
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M Uyghur CV7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ug
metrics:
- name: Test WER
type: wer
value: 25.845
- name: Test CER
type: cer
value: 4.795
---
# XLS-R-300M Uyghur CV7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1772
- Wer: 0.2589
## Model description
For a description of the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
The model vocabulary consists of the alphabetic characters of the [Perso-Arabic script for the Uyghur language](https://omniglot.com/writing/uyghur.htm), with punctuation removed.
## Intended uses & limitations
This model is expected to be of some utility for low-fidelity use cases such as:
- Draft video captions
- Indexing of recorded broadcasts
The model is not reliable enough to use as a substitute for live captions for accessibility purposes, and it should not be used in a manner that would infringe the privacy of any of the contributors to the Common Voice dataset nor any other speakers.
## Training and evaluation data
The combination of the `train` and `dev` splits of the Common Voice official release was used as training data. The official `test` split was used as validation data as well as for final evaluation.
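A sketch of how those splits can be assembled with 🤗 Datasets is shown below (the exact loading call is an assumption, not the original training script; Common Voice requires authentication):
```python
from datasets import load_dataset, concatenate_datasets

cv = "mozilla-foundation/common_voice_7_0"
train = load_dataset(cv, "ug", split="train", use_auth_token=True)
dev = load_dataset(cv, "ug", split="validation", use_auth_token=True)
train_data = concatenate_datasets([train, dev])  # train + dev used for training
test_data = load_dataset(cv, "ug", split="test", use_auth_token=True)  # validation and final evaluation
```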
## Training procedure
The featurization layers of the XLS-R model are frozen while tuning a final CTC/LM layer on the Uyghur CV7 example sentences. A ramped learning rate is used with an initial warmup phase of 2000 steps, a max of 0.0001, and cooling back towards 0 for the remainder of the 18500 steps (100 epochs).
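In code, freezing the featurization layers typically looks like the sketch below (a minimal illustration, not the exact training script; the vocabulary size is a placeholder):
```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    ctc_loss_reduction="mean",
    vocab_size=40,  # placeholder: must match the Uyghur tokenizer vocabulary
)
model.freeze_feature_encoder()  # keep the convolutional featurization layers fixed
```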
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3043 | 2.73 | 500 | 3.2415 | 1.0 |
| 3.0482 | 5.46 | 1000 | 2.9591 | 1.0 |
| 1.4767 | 8.2 | 1500 | 0.4779 | 0.5777 |
| 1.3152 | 10.93 | 2000 | 0.3697 | 0.4938 |
| 1.2246 | 13.66 | 2500 | 0.3084 | 0.4459 |
| 1.1781 | 16.39 | 3000 | 0.2842 | 0.4154 |
| 1.1351 | 19.13 | 3500 | 0.2615 | 0.3929 |
| 1.1052 | 21.86 | 4000 | 0.2462 | 0.3747 |
| 1.0711 | 24.59 | 4500 | 0.2366 | 0.3652 |
| 1.035 | 27.32 | 5000 | 0.2268 | 0.3557 |
| 1.0277 | 30.05 | 5500 | 0.2243 | 0.3450 |
| 1.002 | 32.79 | 6000 | 0.2204 | 0.3389 |
| 0.9837 | 35.52 | 6500 | 0.2156 | 0.3349 |
| 0.9773 | 38.25 | 7000 | 0.2127 | 0.3289 |
| 0.9807 | 40.98 | 7500 | 0.2142 | 0.3274 |
| 0.9582 | 43.72 | 8000 | 0.2004 | 0.3142 |
| 0.9548 | 46.45 | 8500 | 0.2022 | 0.3050 |
| 0.9251 | 49.18 | 9000 | 0.2019 | 0.3035 |
| 0.9103 | 51.91 | 9500 | 0.1964 | 0.3021 |
| 0.915 | 54.64 | 10000 | 0.1970 | 0.3032 |
| 0.8962 | 57.38 | 10500 | 0.2007 | 0.3046 |
| 0.8729 | 60.11 | 11000 | 0.1967 | 0.2942 |
| 0.8744 | 62.84 | 11500 | 0.1952 | 0.2885 |
| 0.874 | 65.57 | 12000 | 0.1894 | 0.2895 |
| 0.8457 | 68.31 | 12500 | 0.1895 | 0.2828 |
| 0.8519 | 71.04 | 13000 | 0.1912 | 0.2875 |
| 0.8301 | 73.77 | 13500 | 0.1878 | 0.2760 |
| 0.8226 | 76.5 | 14000 | 0.1808 | 0.2701 |
| 0.8071 | 79.23 | 14500 | 0.1849 | 0.2741 |
| 0.7999 | 81.97 | 15000 | 0.1808 | 0.2717 |
| 0.7947 | 84.7 | 15500 | 0.1821 | 0.2716 |
| 0.7783 | 87.43 | 16000 | 0.1824 | 0.2661 |
| 0.7729 | 90.16 | 16500 | 0.1773 | 0.2639 |
| 0.7759 | 92.9 | 17000 | 0.1767 | 0.2629 |
| 0.7713 | 95.63 | 17500 | 0.1780 | 0.2621 |
| 0.7628 | 98.36 | 18000 | 0.1773 | 0.2594 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
lucius/distilgpt2-finetuned-wikitext2 | da3a83b55ce483b31da75c922e035c5e21a6a964 | 2021-10-17T09:45:49.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | lucius | null | lucius/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 29,877 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the WikiText-2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-f2d1db | c79f845cc397f67a1f45d3c280be96cb7b3ee87e | 2021-07-03T02:07:16.000Z | [
"pytorch",
"transfo-xl",
"transformers"
] | null | false | luffycodes | null | luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-f2d1db | 1 | null | transformers | 29,878 | Entry not found |
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-68f3ff | 60c74a6ede4b8b9fbf1665fb78b57c68afd4b986 | 2021-07-02T15:40:46.000Z | [
"pytorch",
"transfo-xl",
"transformers"
] | null | false | luffycodes | null | luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-68f3ff | 1 | null | transformers | 29,879 | Entry not found |
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-a4da87 | cdd8607162cee66c73f52b88fea90e5805789b02 | 2021-07-06T13:34:14.000Z | [
"pytorch",
"transfo-xl",
"transformers"
] | null | false | luffycodes | null | luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-a4da87 | 1 | null | transformers | 29,880 | Entry not found |
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-97f2fb | 5e3731cc7d8a950eb2e67595b9ace5601d2930cd | 2021-07-04T01:49:04.000Z | [
"pytorch",
"transfo-xl",
"transformers"
] | null | false | luffycodes | null | luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-97f2fb | 1 | null | transformers | 29,881 | Entry not found |
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-cf5b17 | 419aa1985f9ad94cfe8911eeab4812465c9b2252 | 2021-07-03T08:12:30.000Z | [
"pytorch",
"transfo-xl",
"transformers"
] | null | false | luffycodes | null | luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-cf5b17 | 1 | null | transformers | 29,882 | Entry not found |
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-7c4c0c | 76bd4806772aaa2f463be06ca8d5367da59f9fda | 2021-07-07T18:43:05.000Z | [
"pytorch",
"transfo-xl",
"transformers"
] | null | false | luffycodes | null | luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-7c4c0c | 1 | null | transformers | 29,883 | Entry not found |
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-a504ec | 70de3165c13c66be7dc46e923331aa5640647913 | 2021-07-03T01:29:32.000Z | [
"pytorch",
"transfo-xl",
"transformers"
] | null | false | luffycodes | null | luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_d-truncated-a504ec | 1 | null | transformers | 29,884 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_7k_grad_adam | fd55c2c1ab2442280fde975e506152dd0165e6d9 | 2021-10-29T21:12:12.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_7k_grad_adam | 1 | null | transformers | 29,885 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_7k_grad_adam_mask | d8e960d41bc19c9ab454c7efa84be4b82ecc5258 | 2021-11-03T04:45:59.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_7k_grad_adam_mask | 1 | null | transformers | 29,886 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_32_bb_bsz_32_nli_lr_1e5_bb_lr_1e5_wu_7k_grad_adam_mask | f1dab40f1c76b2b2f590d3cf0d15b1089660b564 | 2021-10-30T05:59:04.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_32_bb_bsz_32_nli_lr_1e5_bb_lr_1e5_wu_7k_grad_adam_mask | 1 | null | transformers | 29,887 | Entry not found |
luffycodes/mrpc_luffy_mnli_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_250_ep_10 | 3ee6a0e7d3a59922c60a5b5ad965f399e2364df9 | 2021-11-08T06:09:34.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/mrpc_luffy_mnli_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_250_ep_10 | 1 | null | transformers | 29,888 | Entry not found |
luffycodes/om_roberta_mnli_lr1e5_ep_10.model | 6b481bcff7ed6337e52cb179aae4514bb2dac791 | 2021-12-02T06:43:47.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/om_roberta_mnli_lr1e5_ep_10.model | 1 | null | transformers | 29,889 | Entry not found |
luigisbrother/wav2vec2-common_voice-mls-dist | 1db5719635b63e20cc1c8fa117ea6ec5f0f6a861 | 2021-10-20T11:07:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | luigisbrother | null | luigisbrother/wav2vec2-common_voice-mls-dist | 1 | null | transformers | 29,890 | Entry not found |
lukabor/europarl-mlm | fbb4348311fd0e2bdb1a0414047a71bfef4d6358 | 2021-05-19T22:10:58.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lukabor | null | lukabor/europarl-mlm | 1 | null | transformers | 29,891 | Entry not found |
lulueve3/DialoGPT-medium-Kokkoro | 157ea40b8f98f53c8460e0cb92cdd0c276bb25c0 | 2021-09-19T15:55:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lulueve3 | null | lulueve3/DialoGPT-medium-Kokkoro | 1 | null | transformers | 29,892 | ---
tags:
- conversational
---
# Kokkoro DialoGPT Model |
lulueve3/DialoGPT-medium-Kokkoro2 | f95021a62959c75f12e46d3908fe9ce8be38609d | 2021-09-20T01:57:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lulueve3 | null | lulueve3/DialoGPT-medium-Kokkoro2 | 1 | null | transformers | 29,893 | ---
tags:
- conversational
---
# Kokkoro DialoGPT Model |
lysandre/test_dynamic_model | ea81e34daf7f331ee8807664804dc5957ca6582a | 2022-01-27T14:44:29.000Z | [
"pytorch",
"new-model",
"transformers"
] | null | false | lysandre | null | lysandre/test_dynamic_model | 1 | null | transformers | 29,894 | Entry not found |
lysandre/tiny-random-detr | ec0aa259c2c0e0707a490f540c5bda2c799e917c | 2021-07-24T15:02:13.000Z | [
"pytorch",
"detr",
"transformers"
] | null | false | lysandre | null | lysandre/tiny-random-detr | 1 | null | transformers | 29,895 | Entry not found |
lyx10290516/model202109 | f16c0b69208658f349fb8de6d4af3e1e1f13b070 | 2021-09-03T03:12:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lyx10290516 | null | lyx10290516/model202109 | 1 | null | transformers | 29,896 | Entry not found |
lyx10290516/model_cntest | aa1d8febafdf365a959c508bf3a2ac9adb38fb17 | 2021-09-04T11:09:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lyx10290516 | null | lyx10290516/model_cntest | 1 | null | transformers | 29,897 | Entry not found |
m3hrdadfi/icelandic-ner-distilbert | 209c24ea56f570bc2daf9582e3db5c357d1c45fa | 2021-05-27T17:17:28.000Z | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"is",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | m3hrdadfi | null | m3hrdadfi/icelandic-ner-distilbert | 1 | null | transformers | 29,898 | ---
language: is
license: apache-2.0
widget:
- text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
- text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ."
- text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ."
- text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ."
- text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ."
---
# IcelandicNER DistilBERT
This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language.
The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) in 2018–2020 and covers eight types of entities:
- Date
- Location
- Miscellaneous
- Money
- Organization
- Percent
- Person
- Time
## Dataset Information
| | Records | B-Date | B-Location | B-Miscellaneous | B-Money | B-Organization | B-Percent | B-Person | B-Time | I-Date | I-Location | I-Miscellaneous | I-Money | I-Organization | I-Percent | I-Person | I-Time |
|:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|
| Train | 39988 | 3409 | 5980 | 4351 | 729 | 5754 | 502 | 11719 | 868 | 2112 | 516 | 3036 | 770 | 2382 | 50 | 5478 | 790 |
| Valid | 7063 | 570 | 1034 | 787 | 100 | 1078 | 103 | 2106 | 147 | 409 | 76 | 560 | 104 | 458 | 7 | 998 | 136 |
| Test | 8299 | 779 | 1319 | 935 | 153 | 1315 | 108 | 2247 | 172 | 483 | 104 | 660 | 167 | 617 | 10 | 1089 | 158 |
## Evaluation
The following table summarizes the scores obtained by the model, overall and per class.
| entity | precision | recall | f1-score | support |
|:-------------:|:---------:|:--------:|:--------:|:-------:|
| Date | 0.969309 | 0.973042 | 0.971172 | 779.0 |
| Location | 0.941221 | 0.946929 | 0.944067 | 1319.0 |
| Miscellaneous | 0.848283 | 0.819251 | 0.833515 | 935.0 |
| Money | 0.928571 | 0.934641 | 0.931596 | 153.0 |
| Organization | 0.874147 | 0.876806 | 0.875475 | 1315.0 |
| Percent | 1.000000 | 1.000000 | 1.000000 | 108.0 |
| Person | 0.956674 | 0.972853 | 0.964695 | 2247.0 |
| Time | 0.965318 | 0.970930 | 0.968116 | 172.0 |
| micro avg | 0.926110 | 0.929141 | 0.927623 | 7028.0 |
| macro avg | 0.935441 | 0.936807 | 0.936079 | 7028.0 |
| weighted avg | 0.925578 | 0.929141 | 0.927301 | 7028.0 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "m3hrdadfi/icelandic-ner-distilbert"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
|
madbuda/DialoGPT-got-skippy | 9cb293f4f24ddcf29ffa932e4dc23c94d7077764 | 2021-11-25T04:17:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | madbuda | null | madbuda/DialoGPT-got-skippy | 1 | null | transformers | 29,899 | ---
tags:
- conversational
---
# My Awesome Model |