modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lucasresck/distilbert-base-uncased-finetuned-squad | cbcfe98089900b349e5a8fba21ce80661adcdb68 | 2021-11-15T17:04:05.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | lucasresck | null | lucasresck/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 35,600 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
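A minimal question-answering sketch with the `transformers` pipeline (the question and context below are illustrative, not from the original card):
```python
from transformers import pipeline

# Hypothetical usage sketch; the question and context are made-up examples.
qa = pipeline("question-answering", model="lucasresck/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```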
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
lucio/wav2vec2-large-xlsr-luganda | cd3b7faf94ab64bb71e6f6a9cd3edf64b5859edd | 2021-07-06T10:20:26.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lg",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lucio | null | lucio/wav2vec2-large-xlsr-luganda | 0 | null | transformers | 35,601 | ---
language: lg
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large Luganda by Lucio
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lg
type: common_voice
args: lg
metrics:
- name: Test WER
type: wer
value: 29.52
---
# Wav2Vec2-Large-XLSR-53-lg
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. Training used the `train`, `validation` and `other` splits (excluding voices that appear in the test set), with the `test` split serving as both validation and test data.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice. (Available in Colab [here](https://colab.research.google.com/drive/1XxZ3mJOEXwIn-QH3C23jD_Qpom9aA1vH?usp=sharing).)
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import unidecode
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model.to("cuda")
chars_to_ignore_regex = '[\[\],?.!;:%"“”(){}‟ˮʺ″«»/…‽�–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
def remove_special_characters(batch):
# word-internal apostrophes are marking contractions
batch["norm_text"] = re.sub(r'[‘’´`]', r"'", batch["sentence"])
# most other punctuation is ignored
batch["norm_text"] = re.sub(chars_to_ignore_regex, "", batch["norm_text"]).lower().strip()
batch["norm_text"] = re.sub(r"(-|' | '| +)", " ", batch["norm_text"])
# remove accents from a few characters (from loanwords, not tones)
batch["norm_text"] = unidecode.unidecode(batch["norm_text"])
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
test_dataset = test_dataset.map(remove_special_characters)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["norm_text"])))
```
**Test Result**: 29.52 %
## Training
The Common Voice `train`, `validation` and `other` datasets were used for training, excluding voices that are in both the `other` and `test` datasets. The data was augmented to twice the original size with added noise and manipulated pitch, phase and intensity.
Training proceeded for 60 epochs, on 1 V100 GPU provided by OVHcloud. The `test` data was used for validation.
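The augmentation code itself is not included in this card; a minimal sketch of the kind of augmentation described (additive noise, a pitch shift, and a simple intensity change; phase manipulation omitted), assuming `librosa` and 16 kHz NumPy arrays like those produced in the snippets above, with all scales being illustrative choices:
```python
import numpy as np
import librosa

def augment(speech, sample_rate=16_000):
    """Return illustrative augmented copies of a 16 kHz speech array."""
    # Additive Gaussian noise; the 0.005 scale is an arbitrary illustrative choice.
    noisy = speech + 0.005 * np.random.randn(len(speech)).astype(speech.dtype)
    # Pitch shift by two semitones (illustrative amount).
    shifted = librosa.effects.pitch_shift(speech, sample_rate, n_steps=2)
    # Simple intensity manipulation via gain.
    quieter = 0.8 * speech
    return [noisy, shifted, quieter]
```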
The [script used for training](https://github.com/serapio/transformers/blob/feature/xlsr-finetune/examples/research_projects/wav2vec2/run_common_voice.py) is adapted from the [example script provided in the transformers repo](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). |
luffycodes/bb_narataka_roberta_base_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_7k_grad_adam_mask | 4e3271a2c49e85a16dbc4c5f6560947922e8c3f7 | 2021-11-03T17:25:56.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_base_nli_bsz_16_bb_bsz_16_nli_lr_3e6_bb_lr_3e6_wu_7k_grad_adam_mask | 0 | null | transformers | 35,602 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_2e5_bb_lr_2e5 | 907d44627bc79364ec71e71c90ce258481909156 | 2021-10-25T16:59:58.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_2e5_bb_lr_2e5 | 0 | null | transformers | 35,603 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_2e5_bb_lr_2e5_norm_bb_mlm_loss | 4f084c4744aef420f46ea4669830e8f96fdffcb6 | 2021-10-24T14:55:52.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_2e5_bb_lr_2e5_norm_bb_mlm_loss | 0 | null | transformers | 35,604 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_2e6_bb_lr_2e6_wu_7k_grad_adam_mask | a1392fbe9a9bd05a37d28cb28ac3746612ae8b2f | 2021-11-04T02:48:31.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_2e6_bb_lr_2e6_wu_7k_grad_adam_mask | 0 | null | transformers | 35,605 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_7e6_bb_lr_7e6_wu_7k_grad_adam_mask | 792f8a5e0af3e33a4d14b6d189e9fba9a1b06b20 | 2021-11-02T18:17:17.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_7e6_bb_lr_7e6_wu_7k_grad_adam_mask | 0 | null | transformers | 35,606 | Entry not found |
luffycodes/mrpc_roberta_large_mnli_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_250_ep_10_clone | 8adca56292164d7b7b3bd29408ef9291cf39ffc3 | 2021-11-08T06:29:17.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/mrpc_roberta_large_mnli_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_250_ep_10_clone | 0 | null | transformers | 35,607 | Entry not found |
luoweijie/dummy-model | 30c1a5bd6412743db1864193dffef3ab061634d4 | 2021-10-21T01:13:17.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | luoweijie | null | luoweijie/dummy-model | 0 | null | transformers | 35,608 | Entry not found |
lysandre/dummy-test | 1159f35738592b3acc22d40dc1e4325b0cc078aa | 2021-04-20T22:23:28.000Z | [
"pytorch"
] | null | false | lysandre | null | lysandre/dummy-test | 0 | null | null | 35,609 | Entry not found |
lysandre/my-cool-arxiv-model | a28ec738712afc973dbf9ba1c37b236898fd7525 | 2021-11-11T00:09:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lysandre | null | lysandre/my-cool-arxiv-model | 0 | null | transformers | 35,610 | Entry not found |
m-lin20/satellite-instrument-bert-NER | 3615ce471a73a103813e569357dfbb3a2ba489cd | 2021-12-13T08:11:18.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"transformers",
"autotrain_compatible"
] | token-classification | false | m-lin20 | null | m-lin20/satellite-instrument-bert-NER | 0 | 1 | transformers | 35,611 | ---
language: "pt"
widget:
- text: "Poised for launch in mid-2021, the joint NASA-USGS Landsat 9 mission will continue this important data record. In many respects Landsat 9 is a clone of Landsat-8. The Operational Land Imager-2 (OLI-2) is largely identical to Landsat 8 OLI, providing calibrated imagery covering the solar reflected wavelengths. The Thermal Infrared Sensor-2 (TIRS-2) improves upon Landsat 8 TIRS, addressing known issues including stray light incursion and a malfunction of the instrument scene select mirror. In addition, Landsat 9 adds redundancy to TIRS-2, thus upgrading the instrument to a 5-year design life commensurate with other elements of the mission. Initial performance testing of OLI-2 and TIRS-2 indicate that the instruments are of excellent quality and expected to match or improve on Landsat 8 data quality. "
example_title: "example 1"
- text: "Compared to its predecessor, Jason-3, the two AMR-C radiometer instruments have an external calibration system which enables higher radiometric stability accomplished by moving the secondary mirror between well-defined targets. Sentinel-6 allows continuing the study of the ocean circulation, climate change, and sea-level rise for at least another decade. Besides the external calibration for the AMR heritage radiometer (18.7, 23.8, and 34 GHz channels), the AMR-C contains a high-resolution microwave radiometer (HRMR) with radiometer channels at 90, 130, and 168 GHz. This subsystem allows for a factor of 5× higher spatial resolution at coastal transitions. This article presents a brief description of the instrument and the measured performance of the completed AMR-C-A and AMR-C-B instruments."
example_title: "example 2"
- text: "Landsat 9 will continue the Landsat data record into its fifth decade with a near-copy build of Landsat 8 with launch scheduled for December 2020. The two instruments on Landsat 9 are Thermal Infrared Sensor-2 (TIRS-2) and Operational Land Imager-2 (OLI-2)."
example_title: "example 3"
inference:
parameters:
aggregation_strategy: "first"
---
# satellite-instrument-bert-NER
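A minimal usage sketch with the `transformers` token-classification pipeline, matching the `aggregation_strategy: "first"` inference setting declared in the metadata above (the input sentence is taken from the widget examples):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="m-lin20/satellite-instrument-bert-NER",
    aggregation_strategy="first",  # matches the inference settings above
)
text = ("The two instruments on Landsat 9 are Thermal Infrared Sensor-2 (TIRS-2) "
        "and Operational Land Imager-2 (OLI-2).")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```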
For details, please visit the [GitHub link](https://github.com/Tsinghua-mLin/satellite-instrument-NER). |
m3hrdadfi/wav2vec2-large-xlsr-estonian | 9da22e1c44ea2f82f9ad2fd81e244ab95c155bfb | 2021-07-06T10:28:26.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-large-xlsr-estonian | 0 | null | transformers | 35,612 | ---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Common Voice sample 1123
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample1123.flac
- label: Common Voice sample 910
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample910.flac
model-index:
- name: XLSR Wav2Vec2 Estonian by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice et
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 33.93
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Estonian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device)
dataset = load_dataset("common_voice", "et", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 10).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: õhulossid lagunevad ning ees ootab maapind
predicted: õhulassid lagunevad ning ees ootab maapind
---
reference: milliseks kiievisse pääsemise nimel võistlev muusik soome muusikamaastiku hetkeseisu hindab ning kas ta ka ennast sellel tulevikus tegutsemas näeb kuuled videost
predicted: milliseks gievisse pääsemise nimel võitlev muusiks soome muusikama aastiku hetke seisu hindab ning kas ta ennast selle tulevikus tegutsemast näeb kuulad videost
---
reference: näiteks kui pool seina on tehtud tekib tunne et tahaks tegelikult natuke teistsugust ja hakkame otsast peale
predicted: näiteks kui pool seine on tehtud tekib tunnetahaks tegelikult matuka teistsugust jahappanna otsast peane
---
reference: neuroesteetilised katsed näitavad et just nägude vaatlemine aktiveerib inimese aju esteetilist keskust
predicted: neuroaisteetiliselt katsed näitaval et just nägude vaatlemine aptiveerid inimese aju est eedilist keskust
---
reference: paljud inimesed kindlasti kadestavad teid kuid ei julge samamoodi vabalt võtta
predicted: paljud inimesed kindlasti kadestavadteid kuid ei julge sama moodi vabalt võtta
---
reference: parem on otsida pileteid inkognito veebi kaudu
predicted: parem on otsida pileteid ning kognitu veebikaudu
---
reference: ja vot siin ma jäin vaikseks
predicted: ja vat siisma ja invaikseks
---
reference: mida sa iseendale juubeli puhul soovid
predicted: mida saise endale jubeli puhul soovid
---
reference: kuumuse ja kõrge temperatuuri tõttu kuivas tühjadel karjamaadel rohi mis muutus kergesti süttivaks
predicted: kuumuse ja kõrge temperatuuri tõttu kuivast ühjadal karjamaadel rohi mis muutus kergesti süttivaks
---
reference: ilmselt on inimesi kelle jaoks on see hea lahendus
predicted: ilmselt on inimesi kelle jaoks on see hea lahendus
---
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device)
dataset = load_dataset("common_voice", "et", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result**:
- WER: 33.93%
## Training & Report
The Common Voice `train` and `validation` datasets were used for training.
You can see the training stats [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_estonian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Estonian--Vmlldzo1NjA1MTI?accessToken=k2b2g3a2i12m1sdwf13q8b226pplmmyw12joxo6vk38eb4djellfzmn9fp2725fw)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Estonian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb) |
m3hrdadfi/wav2vec2-large-xlsr-georgian | a36185cbc4f4cdea7625ceab52bdeaa8d754079f | 2021-11-04T15:22:05.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ka",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-large-xlsr-georgian | 0 | null | transformers | 35,613 | ---
language: ka
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- example_title: Common Voice sample 566
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-georgian/resolve/main/sample566.flac
- example_title: Common Voice sample 95
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-georgian/resolve/main/sample95.flac
model-index:
- name: XLSR Wav2Vec2 Georgian by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ka
type: common_voice
args: ka
metrics:
- name: Test WER
type: wer
value: 43.86
---
# Wav2Vec2-Large-XLSR-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Georgian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
```
**Normalizer**
```bash
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-lithuanian/raw/main/normalizer.py
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd
from normalizer import normalizer
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device)
dataset = load_dataset("common_voice", "ka", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"remove_extra_space": True},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: პრეზიდენტობისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატოში გაწევრიანების აქტიური მხარდამჭერი იყო
predicted: პრეზიდენტო ვისას ბუში საქართველოს და უკრაინის დემოკრატიულ მოძრაობების და ნატიში დაწევრიანების აქტიური მხარდამჭერი იყო
---
reference: შესაძლებელია მისი დამონება და მსახურ დემონად გადაქცევა
predicted: შესაძლებელია მისი დამონებათ და მსახურდემანად გადაქცევა
---
reference: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე
predicted: ეს გამოსახულებები აღბეჭდილი იყო მოსკოვის დიდი მთავრებისა და მეფეების ბეჭდებზე
---
reference: ჯოლიმ ოქროს გლობუსისა და კინომსახიობთა გილდიის ნომინაციები მიიღო
predicted: ჯოლი მოქროს გლობუსისა და კინამსახიობთა გილდიის ნომინაციები მიიღო
---
reference: შემდგომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთდა გაიზარდა წიგნადი ფონდი
predicted: შემდღომში საქალაქო ბიბლიოთეკა სარაიონო ბიბლიოთეკად გადაკეთა გაიზარდა წიგნადი ფოვდი
---
reference: აბრამსი დაუკავშირდა მირანდას და ორი თვის განმავლობაში ისინი მუშაობდნენ აღნიშნული სცენის თანმხლებ მელოდიაზე
predicted: აბრამში და უკავშირდა მირანდეს და ორითვის განმავლობაში ისინი მუშაობდნენა აღნიშნულის ჩენის მთამხლევით მელოდიაში
---
reference: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბორისტული პარტიის ლიდერი ჯერემი კორბინი
predicted: ამჟამად თემთა პალატის ოპოზიციის ლიდერია ლეიბურისტული პარტიის ლიდერი ჯერემი კორვინი
---
reference: ორი
predicted: ორი
---
reference: მას შემდეგ იგი კოლექტივის მუდმივი წევრია
predicted: მას შემდეგ იგი კოლექტივის ფუდ მივი წევრია
---
reference: აზერბაიჯანულ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი
predicted: აზერგვოიჯანალ ფილოსოფიას შეიძლება მივაკუთვნოთ რუსეთის საზოგადო მოღვაწე ჰეიდარ ჯემალი
---
reference: ბრონქსში ჯერომის ავენიუ ჰყოფს გამჭოლ ქუჩებს აღმოსავლეთ და დასავლეთ ნაწილებად
predicted: რონგში დერომიწ ავენილ პოფს გამ დოლფურქებს აღმოსავლეთ და დასავლეთ ნაწილებად
---
reference: ჰაერი არის ჟანგბადის ის ძირითადი წყარო რომელსაც საჭიროებს ყველა ცოცხალი ორგანიზმი
predicted: არი არის ჯამუბადესის ძირითადი წყარო რომელსაც საჭიროოებს ყველა ცოცხალი ორგანიზმი
---
reference: ჯგუფი უმეტესწილად ასრულებს პოპმუსიკის ჟანრის სიმღერებს
predicted: ჯგუფიუმეტესწევად ასრულებს პოპნუსიკის ჟანრის სიმრერებს
---
reference: ბაბილინა მუდმივად ცდილობდა შესაძლებლობების ფარგლებში მიეღო ცოდნა და ახალი ინფორმაცია
predicted: ბაბილინა მუდმივა ცდილობდა შესაძლებლობების ფარგლებში მიიღო ცოტნა და ახალი ინფორმაცია
---
reference: მრევლის რწმენით რომელი ჯგუფიც გაიმარჯვებდა მთელი წლის მანძილზე სიუხვე და ბარაქა არ მოაკლდებოდა
predicted: მრევრის რწმენით რომელიჯგუფის გაიმარჯვებდა მთელიჭლის მანძილზა სიუყვეტაბარაქა არ მოაკლდებოდა
---
reference: ნინო ჩხეიძეს განსაკუთრებული ღვაწლი მიუძღვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში
predicted: მინო ჩხეიძეს განსაკუთრებული ღოვაწლი მიოცხვის ქუთაისისა და რუსთაველის თეატრების შემოქმედებით ცხოვრებაში
---
reference: იგი სამი დიალექტისგან შედგება
predicted: იგი სამი დიალეთის გან შედგება
---
reference: ფორმით სირაქლემებს წააგვანან
predicted: ომიცი რაქლემებს ააგვანამ
---
reference: დანი დაიბადა კოლუმბუსში ოჰაიოში
predicted: დონი დაიბაოდა კოლუმბუსში ოხვაიოში
---
reference: მშენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში
predicted: შენებლობისათვის გამოიყო ადგილი ყოფილი აეროპორტის რაიონში
---
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string
from normalizer import normalizer
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device)
dataset = load_dataset("common_voice", "ka", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"remove_extra_space": True},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result**:
- WER: 43.86%
## Training & Report
The Common Voice `train` and `validation` datasets were used for training.
You can see the training stats [here](https://wandb.ai/m3hrdadfi/wav2vec2_large_xlsr_ka/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Georgian--Vmlldzo1OTQyMzk?accessToken=ytf7jseje66a3byuheh68o6a7215thjviscv5k2ewl5hgq9yqr50yxbko0bnf1d3)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Georgian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
## Questions?
Post a Github issue on the [Wav2Vec](https://github.com/m3hrdadfi/wav2vec) repo. |
machine2049/bert-base-uncased-finetuned-duorc_bert | bcf888f76b5283f9b7d7834d48d5f35f7f0cb231 | 2021-12-10T01:35:56.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | machine2049 | null | machine2049/bert-base-uncased-finetuned-duorc_bert | 0 | null | transformers | 35,614 | Entry not found |
machine2049/distilbert-base-uncased-finetuned-duorc_distilbert | dafc2bc5d556cf4f9a81537377ec13bafdfb767c | 2021-12-15T11:28:23.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | machine2049 | null | machine2049/distilbert-base-uncased-finetuned-duorc_distilbert | 0 | null | transformers | 35,615 | Entry not found |
machine2049/distilbert-base-uncased-finetuned-squad | 95538030d1e556a24d7e0e7de72910c7a5287037 | 2021-12-09T02:02:44.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | machine2049 | null | machine2049/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 35,616 | Entry not found |
machine2049/distilbert-base-uncased-finetuned-squad_distilbert | afe851c9ac95c88f64e7ab8c34ff587e55ff03f4 | 2021-12-09T21:34:25.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | machine2049 | null | machine2049/distilbert-base-uncased-finetuned-squad_distilbert | 0 | null | transformers | 35,617 | Entry not found |
mackei/ai12 | e1ee76e7e8db7c24df770cdadf7313c33ba8aed5 | 2022-02-10T12:31:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mackei | null | mackei/ai12 | 0 | null | transformers | 35,618 | Entry not found |
macleginn/bert-multilingual-paws | 9bb81812143a0832c99457b5f56701634b35cfd9 | 2021-11-11T17:20:55.000Z | [
"pytorch"
] | null | false | macleginn | null | macleginn/bert-multilingual-paws | 0 | null | null | 35,619 | Entry not found |
madbuda/DialoGPT-medium-skippy | b5f4a12243b2fda2bb147b6f3c9d553b5e39e166 | 2021-11-25T02:45:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | madbuda | null | madbuda/DialoGPT-medium-skippy | 0 | null | transformers | 35,620 | ---
tags:
- conversational
---
# My Awesome Model |
madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2 | ead15fce67e3003ae1ea873316afbfe2f057fc0e | 2021-05-19T22:45:40.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2 | 0 | null | transformers | 35,621 | Used [run.sh](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2/blob/main/run.sh) used to train using transformers/example/question_answering code.
Evaluation results : F1= 85.85 , a much better result than the original 81.9 from the BERT paper, due to the use of the "whole-word-masking" variation.
```
{
"HasAns_exact": 80.58367071524967,
"HasAns_f1": 86.64594807945029,
"HasAns_total": 5928,
"NoAns_exact": 85.06307821698907,
"NoAns_f1": 85.06307821698907,
"NoAns_total": 5945,
"best_exact": 82.82658131895899,
"best_exact_thresh": 0.0,
"best_f1": 85.85337995578023,
"best_f1_thresh": 0.0,
"epoch": 2.0,
"eval_samples": 12134,
"exact": 82.82658131895899,
"f1": 85.85337995578037,
"total": 11873
}
``` |
maelfabien/marcel_customer_service_medium_masked | 62fc91f340d0023d578f276e2110c956ec36d3d8 | 2021-04-14T13:27:45.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | maelfabien | null | maelfabien/marcel_customer_service_medium_masked | 0 | null | transformers | 35,622 | Entry not found |
maelfabien/marcel_customer_service_xlarge | c695bafef3b7956b9b68688e0a522f09ed7e7b67 | 2021-04-14T12:42:05.000Z | [
"pytorch",
"camembert",
"text-generation",
"transformers"
] | text-generation | false | maelfabien | null | maelfabien/marcel_customer_service_xlarge | 0 | null | transformers | 35,623 | Entry not found |
maelfabien/marcel_customer_service_xlarge_masked | 7b51181c07fad816229aaf6eacaf667b3327b19c | 2021-04-14T13:21:49.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | maelfabien | null | maelfabien/marcel_customer_service_xlarge_masked | 0 | null | transformers | 35,624 | Entry not found |
magnusgray/mlm_imdb_model | dd26b472d4aebb7bd23dab7e2613df27ff2d3d42 | 2021-09-10T15:25:13.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | magnusgray | null | magnusgray/mlm_imdb_model | 0 | null | transformers | 35,625 | Entry not found |
majonez57/JoeBot | d3d5905b087c4c1fd70851428ac173d4aa33ff06 | 2021-08-28T20:54:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | majonez57 | null | majonez57/JoeBot | 0 | null | transformers | 35,626 | ---
tags:
- conversational
---
# DialoGPT Joe Bot |
malloc/OpenNMT-py-English-German-Transformer | 1510d9c7b581db9d3a69310184dc4f6646611ec7 | 2022-01-18T20:18:11.000Z | [
"de",
"en",
"dataset:WMT",
"translation",
"pytorch",
"license:mit"
] | translation | false | malloc | null | malloc/OpenNMT-py-English-German-Transformer | 0 | 1 | null | 35,627 | ---
language:
- de
- en
tags:
- translation
- pytorch
license: mit
datasets:
- WMT
metrics:
- bleu
---
# OpenNMT-py-English-German-Transformer
[OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework.
OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for English to German translation.
- Configuration: Base Transformer configuration with [standard training options](http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model-do-you-support-multi-gpu)
- Data: WMT with shared SentencePiece model
- BLEU:
- newstest2014 = 26.89
- newstest2017 = 28.09 |
malloc/OpenNMT-py-German-English-2-layer-BiLSTM | a8c3eeb52928bb9b1c27142dd542b5436e4029c4 | 2022-01-18T20:22:23.000Z | [
"de",
"en",
"dataset:IWSLT ‘14 DE-EN",
"translation",
"pytorch",
"license:mit"
] | translation | false | malloc | null | malloc/OpenNMT-py-German-English-2-layer-BiLSTM | 0 | null | null | 35,628 | ---
language:
- de
- en
tags:
- translation
- pytorch
license: mit
datasets:
- IWSLT ‘14 DE-EN
metrics:
- bleu
---
# OpenNMT-py-German-English-2-layer-BiLSTM
[OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework.
OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for German to English translation.
- Configuration: 2-layer BiLSTM with hidden size 500 trained for 20 epochs
- Data: IWSLT ‘14 DE-EN
- BLEU: 30.33 |
manandey/wav2vec2-large-xlsr-assamese | 420e13fd515a7965deb8cd331615c5f9b2b39644 | 2021-07-06T11:22:54.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"as",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | manandey | null | manandey/wav2vec2-large-xlsr-assamese | 0 | null | transformers | 35,629 | ---
language: as
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Assamese by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice as
type: common_voice
args: as
metrics:
- name: Test WER
type: wer
value: 74.25
---
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "as", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-assamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed dataset
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 74.25%
## Training
The Common Voice `train` and `validation` datasets were used for training. |
manandey/wav2vec2-large-xlsr-breton | 3ada51d2d5601b1a8e17d6de4892fa47e5ef137a | 2021-07-06T11:29:55.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | manandey | null | manandey/wav2vec2-large-xlsr-breton | 0 | null | transformers | 35,630 | ---
language: br
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Breton by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice br
type: common_voice
args: br
metrics:
- name: Test WER
type: wer
value: 54.04
---
# Wav2Vec2-Large-XLSR-53-Breton
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Breton using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "br", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)\/\«\»\½\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed dataset
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 54.04%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
manandey/wav2vec2-large-xlsr-tamil | 37d6f1b8798addda2d6ed3db8beb1605f0ed9525 | 2022-03-25T16:52:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | manandey | null | manandey/wav2vec2-large-xlsr-tamil | 0 | null | transformers | 35,631 | ---
language: ta
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Tamil by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 56.44
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed dataset
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.44%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
manav/dialogpt-medium-berkeley-reddit | 82f7042a320b0184b2e44e123bb83996aa9dc166 | 2021-05-23T08:52:16.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | manav | null | manav/dialogpt-medium-berkeley-reddit | 0 | null | transformers | 35,632 | ---
tags:
- conversational
---
## Model description
Finetuned version of DialoGPT-medium, trained on data scraped from the r/Berkeley subreddit. The data wasn't thoroughly vetted, so the model may display biases that I am unaware of; tread with caution when using this model until further analysis of its biases can be performed.
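A minimal single-turn chat sketch, following the usual DialoGPT generation recipe (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("manav/dialogpt-medium-berkeley-reddit")
model = AutoModelForCausalLM.from_pretrained("manav/dialogpt-medium-berkeley-reddit")

# Encode the user prompt plus the end-of-string token, then generate a reply.
input_ids = tokenizer.encode("Which CS classes should I take first?" + tokenizer.eos_token,
                             return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```
For multi-turn chat, append each exchange to the token history as in the standard DialoGPT examples. |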
manraf/DialoGPT-smmall-harrypotter | 4564dc206513a615d4e5177b0edc97c1ee73e2f7 | 2021-09-01T13:00:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | manraf | null | manraf/DialoGPT-smmall-harrypotter | 0 | null | transformers | 35,633 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
manudotc/transformers_distilbert-base-uncased_finetuneQA_squad | d18b5315e565c30f5242035d18e39e7ecd27fc76 | 2021-04-12T07:45:36.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | manudotc | null | manudotc/transformers_distilbert-base-uncased_finetuneQA_squad | 0 | null | transformers | 35,634 | Entry not found |
manueltonneau/clinicalcovid-bert-nli | 9a0bad1bfc8238d32bc4f08784b303adb165180c | 2021-05-19T22:59:04.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | manueltonneau | null | manueltonneau/clinicalcovid-bert-nli | 0 | null | transformers | 35,635 | Entry not found |
marcopost-it/biobert-it | d07a9c181d80bcc111d99e39feb6558be1848975 | 2022-02-16T14:15:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | marcopost-it | null | marcopost-it/biobert-it | 0 | null | transformers | 35,636 | Hi!
This model has been trained on Italian biomedical data.
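A minimal fill-mask sketch with the `transformers` pipeline (the Italian clinical sentence is an illustrative example, not from the training data):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="marcopost-it/biobert-it")
# "Il paziente presenta [MASK] all'addome." -- "The patient presents [MASK] in the abdomen."
for pred in fill("Il paziente presenta [MASK] all'addome."):
    print(pred["token_str"], round(pred["score"], 3))
```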
For further information, do not hesitate to send me a message! ;)
[email protected] (Marco Postiglione) |
markverschuren/bert-base-dutch-cased-finetuned-mark | e7d44e4b60332b2f0b874f4ba4dadcd91eca1464 | 2021-08-05T16:15:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | false | markverschuren | null | markverschuren/bert-base-dutch-cased-finetuned-mark | 0 | null | transformers | 35,637 | ---
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-dutch-cased-finetuned-mark
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-mark
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5468
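A quick usage sketch for the fill-mask task named above (the Dutch example sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="markverschuren/bert-base-dutch-cased-finetuned-mark")
# "Amsterdam is de [MASK] van Nederland." -- "Amsterdam is the [MASK] of the Netherlands."
print(unmasker("Amsterdam is de [MASK] van Nederland.", top_k=3))
```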
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9739 | 1.0 | 844 | 1.6029 |
| 1.7437 | 2.0 | 1688 | 1.5662 |
| 1.6901 | 3.0 | 2532 | 1.5520 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
marma/wav2vec2-large-xlsr-swedish | 9d83a19efb8341e768c321b3a356227a01f93022 | 2021-07-06T12:28:48.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | marma | null | marma/wav2vec2-large-xlsr-swedish | 0 | null | transformers | 35,638 | ---
language: sv
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Swedish by Marma
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv
metrics:
- name: Test WER
type: wer
value: 23.33
---
# Wav2Vec2-Large-XLSR-53-Swedish
This model has moved [here](https://huggingface.co/KBLab/wav2vec2-large-xlsr-53-swedish) |
masa0711/sentence-bert-base-ja-mean-tokens-v2 | bc62f78a510d7f6cdf7dc33cf8942c568545ea05 | 2021-12-16T07:31:40.000Z | [
"pytorch"
] | null | false | masa0711 | null | masa0711/sentence-bert-base-ja-mean-tokens-v2 | 0 | null | null | 35,639 | Entry not found |
masakhane/m2m100_418M_fon_fr_rel_news | 5d91fd517c9f56ac64c53be53735ddab211e8f81 | 2022-04-16T18:55:19.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"fr",
"fon",
"dataset:JW300 + [LAFAND](https://github.com/masakhane-io/lafand-mt)",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fon_fr_rel_news | 0 | null | transformers | 35,640 |
---
language:
- fr
- fon
datasets:
- JW300 + [LAFAND](https://github.com/masakhane-io/lafand-mt)
---
# m2m100_418M-fon-fr-mt
## Model description
**m2m100_418M-fon-fr-mt** is a **machine translation** model from Fon to French based on a fine-tuned facebook/m2m100_418M model. It establishes a **baseline** for automatically translating texts from Fon to French.
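A hedged usage sketch with the `transformers` M2M100 classes follows. One caveat: the stock M2M100 tokenizer has no dedicated Fon language code, so the source-language setting below is an assumption; check this checkpoint's tokenizer config for the code actually used during fine-tuning.
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

name = "masakhane/m2m100_418M_fon_fr_rel_news"
tokenizer = M2M100Tokenizer.from_pretrained(name)
model = M2M100ForConditionalGeneration.from_pretrained(name)

src_text = "..."  # a Fon source sentence goes here
tokenizer.src_lang = "fr"  # ASSUMPTION: placeholder source code; verify in the repo config
inputs = tokenizer(src_text, return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```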
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
Specifically, this model is a *m2m100_418M* model that was fine-tuned on JW300 Fon corpus and [LAFAND](https://github.com/masakhane-io/lafand-mt).
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **7.81 BLEU** on the [LAFAND test set](https://github.com/masakhane-io/lafand-mt)
### BibTeX entry and citation info
By David Adelani
```
```
|
masapasa/xls-r-300m-it-cv8 | 7b5dfc83ead625ed69ebdbbc2845326ef267526c | 2022-02-02T13:29:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | masapasa | null | masapasa/xls-r-300m-it-cv8 | 0 | 1 | transformers | 35,641 | ---
language:
- it
license: apache-2.0
tags:
- robust-speech-event
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
---
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0278
- Wer: 1.0286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.7838 | 0.01 | 5 | 14.5035 | 1.0 |
| 13.0582 | 0.03 | 10 | 13.6658 | 1.0 |
| 7.3034 | 0.04 | 15 | 9.7898 | 1.0 |
| 6.1847 | 0.05 | 20 | 6.9148 | 1.0 |
| 5.3371 | 0.07 | 25 | 5.3661 | 1.0 |
| 4.4274 | 0.08 | 30 | 4.6945 | 1.0 |
| 4.0918 | 0.1 | 35 | 4.3172 | 1.0 |
| 4.1734 | 0.11 | 40 | 4.0759 | 1.0 |
| 3.7332 | 0.12 | 45 | 3.9039 | 1.0 |
| 3.6871 | 0.14 | 50 | 3.7777 | 1.0 |
| 3.4428 | 0.15 | 55 | 3.6718 | 1.0 |
| 3.5514 | 0.16 | 60 | 3.5947 | 1.0 |
| 3.4307 | 0.18 | 65 | 3.5144 | 1.0 |
| 3.4102 | 0.19 | 70 | 3.4432 | 1.0 |
| 3.4964 | 0.21 | 75 | 3.3890 | 1.0 |
| 3.3936 | 0.22 | 80 | 3.3467 | 1.0 |
| 3.3051 | 0.23 | 85 | 3.3102 | 1.0 |
| 3.278 | 0.25 | 90 | 3.2801 | 1.0 |
| 3.2223 | 0.26 | 95 | 3.2440 | 1.0 |
| 3.1888 | 0.27 | 100 | 3.2900 | 1.0 |
| 3.218 | 0.29 | 105 | 3.2627 | 1.0 |
| 3.1308 | 0.3 | 110 | 3.2152 | 1.0 |
| 3.109 | 0.31 | 115 | 3.1686 | 1.0 |
| 3.1188 | 0.33 | 120 | 3.1734 | 1.0 |
| 3.1132 | 0.34 | 125 | 3.1431 | 1.0 |
| 3.0667 | 0.36 | 130 | 3.1686 | 1.0 |
| 3.1167 | 0.37 | 135 | 3.1885 | 1.0 |
| 3.0592 | 0.38 | 140 | 3.1100 | 1.0 |
| 3.0531 | 0.4 | 145 | 3.1149 | 1.0 |
| 3.1224 | 0.41 | 150 | 3.1205 | 1.0 |
| 3.0651 | 0.42 | 155 | 3.1101 | 1.0 |
| 3.0077 | 0.44 | 160 | 3.0980 | 1.0 |
| 3.0027 | 0.45 | 165 | 3.1132 | 1.0 |
| 3.0423 | 0.47 | 170 | 3.0886 | 1.0 |
| 3.0462 | 0.48 | 175 | 3.0865 | 1.0 |
| 3.0701 | 0.49 | 180 | 3.0863 | 1.0 |
| 3.0871 | 0.51 | 185 | 3.0825 | 1.0 |
| 3.0585 | 0.52 | 190 | 3.0720 | 1.0 |
| 3.0274 | 0.53 | 195 | 3.0736 | 1.0 |
| 3.0983 | 0.55 | 200 | 3.0658 | 1.0 |
| 3.0538 | 0.56 | 205 | 3.1241 | 1.0 |
| 3.0862 | 0.57 | 210 | 3.0573 | 1.0 |
| 3.0041 | 0.59 | 215 | 3.0608 | 1.0 |
| 3.027 | 0.6 | 220 | 3.0614 | 1.0 |
| 2.9916 | 0.62 | 225 | 3.0527 | 1.0 |
| 3.0157 | 0.63 | 230 | 3.0514 | 1.0 |
| 3.0429 | 0.64 | 235 | 3.0391 | 1.0 |
| 2.999 | 0.66 | 240 | 3.0462 | 1.0 |
| 3.0053 | 0.67 | 245 | 3.0438 | 1.0 |
| 2.9812 | 0.68 | 250 | 3.0447 | 1.0 |
| 3.0062 | 0.7 | 255 | 3.0660 | 1.0 |
| 3.0045 | 0.71 | 260 | 3.0103 | 1.0 |
| 2.9684 | 0.73 | 265 | 3.0106 | 1.0 |
| 2.9885 | 0.74 | 270 | 3.0014 | 1.0 |
| 3.0062 | 0.75 | 275 | 2.9885 | 1.0 |
| 2.9736 | 0.77 | 280 | 3.0330 | 1.0 |
| 2.9766 | 0.78 | 285 | 2.9910 | 1.0 |
| 2.9545 | 0.79 | 290 | 2.9972 | 1.0 |
| 2.9936 | 0.81 | 295 | 2.9872 | 1.0 |
| 3.0832 | 0.82 | 300 | 2.9978 | 1.0 |
| 2.974 | 0.83 | 305 | 2.9978 | 1.0 |
| 2.9846 | 0.85 | 310 | 2.9849 | 1.0 |
| 2.9554 | 0.86 | 315 | 2.9810 | 1.0 |
| 2.9524 | 0.88 | 320 | 2.9731 | 1.0 |
| 2.9426 | 0.89 | 325 | 2.9824 | 1.0 |
| 2.9416 | 0.9 | 330 | 2.9731 | 1.0 |
| 2.9705 | 0.92 | 335 | 2.9830 | 1.0 |
| 2.9502 | 0.93 | 340 | 2.9713 | 1.0 |
| 2.9393 | 0.94 | 345 | 2.9790 | 1.0 |
| 2.9336 | 0.96 | 350 | 2.9684 | 1.0 |
| 2.9542 | 0.97 | 355 | 2.9689 | 1.0 |
| 2.9408 | 0.98 | 360 | 2.9556 | 1.0 |
| 2.9544 | 1.0 | 365 | 2.9563 | 1.0 |
| 2.9187 | 1.01 | 370 | 2.9624 | 1.0 |
| 2.9935 | 1.03 | 375 | 2.9500 | 1.0 |
| 2.9803 | 1.04 | 380 | 2.9558 | 1.0 |
| 2.9867 | 1.05 | 385 | 2.9473 | 1.0 |
| 2.8925 | 1.07 | 390 | 2.9444 | 1.0 |
| 2.9633 | 1.08 | 395 | 2.9490 | 1.0 |
| 2.9191 | 1.1 | 400 | 2.9362 | 1.0 |
| 2.9081 | 1.11 | 405 | 2.9394 | 1.0 |
| 2.9381 | 1.12 | 410 | 2.9846 | 1.0 |
| 2.9271 | 1.14 | 415 | 2.9638 | 1.0 |
| 2.959 | 1.15 | 420 | 2.9835 | 1.0 |
| 2.9486 | 1.16 | 425 | 2.9361 | 1.0 |
| 2.9246 | 1.18 | 430 | 2.9615 | 1.0 |
| 2.923 | 1.19 | 435 | 2.9313 | 1.0 |
| 2.8908 | 1.21 | 440 | 2.9362 | 1.0 |
| 2.8976 | 1.22 | 445 | 2.9224 | 1.0 |
| 2.9278 | 1.23 | 450 | 2.9276 | 1.0 |
| 2.8429 | 1.25 | 455 | 2.9299 | 1.0 |
| 2.867 | 1.26 | 460 | 2.9258 | 1.0 |
| 2.9734 | 1.27 | 465 | 2.9281 | 1.0000 |
| 2.934 | 1.29 | 470 | 2.9229 | 1.0 |
| 2.9521 | 1.3 | 475 | 2.9134 | 1.0 |
| 2.9098 | 1.31 | 480 | 2.9051 | 0.9993 |
| 2.9112 | 1.33 | 485 | 2.9028 | 0.9999 |
| 2.8799 | 1.34 | 490 | 2.9101 | 0.9986 |
| 2.857 | 1.36 | 495 | 2.9005 | 0.9992 |
| 2.8525 | 1.37 | 500 | 2.8937 | 1.0 |
| 2.8682 | 1.38 | 505 | 2.8904 | 1.0000 |
| 2.8899 | 1.4 | 510 | 2.8914 | 0.9964 |
| 2.7475 | 1.41 | 515 | 2.8842 | 0.9950 |
| 2.9263 | 1.42 | 520 | 2.8852 | 0.9972 |
| 2.8603 | 1.44 | 525 | 2.8762 | 0.9966 |
| 2.864 | 1.45 | 530 | 2.8680 | 0.9978 |
| 2.8632 | 1.47 | 535 | 2.8602 | 0.9964 |
| 2.9289 | 1.48 | 540 | 2.8584 | 0.9952 |
| 2.8689 | 1.49 | 545 | 2.8587 | 0.9956 |
| 2.8304 | 1.51 | 550 | 2.8511 | 0.9993 |
| 2.8024 | 1.52 | 555 | 2.8460 | 1.0 |
| 2.7649 | 1.53 | 560 | 2.8460 | 1.0000 |
| 2.8756 | 1.55 | 565 | 2.8348 | 0.9987 |
| 2.8808 | 1.56 | 570 | 2.8539 | 0.9993 |
| 2.9027 | 1.57 | 575 | 2.8282 | 0.9975 |
| 2.8586 | 1.59 | 580 | 2.8288 | 0.9976 |
| 2.8193 | 1.6 | 585 | 2.8101 | 1.0051 |
| 2.811 | 1.62 | 590 | 2.7965 | 1.0014 |
| 2.7332 | 1.63 | 595 | 2.7884 | 1.0026 |
| 2.7717 | 1.64 | 600 | 2.7883 | 1.0060 |
| 2.6901 | 1.66 | 605 | 2.7801 | 0.9974 |
| 2.6905 | 1.67 | 610 | 2.8113 | 0.9968 |
| 2.7442 | 1.68 | 615 | 2.8113 | 1.0007 |
| 2.8431 | 1.7 | 620 | 2.8152 | 1.0343 |
| 2.8028 | 1.71 | 625 | 2.7790 | 1.0250 |
| 2.7151 | 1.73 | 630 | 2.7653 | 1.0287 |
| 2.7405 | 1.74 | 635 | 2.7714 | 1.1303 |
| 2.7566 | 1.75 | 640 | 2.7488 | 1.0312 |
| 2.7337 | 1.77 | 645 | 2.7498 | 1.0176 |
| 2.7486 | 1.78 | 650 | 2.7496 | 1.0760 |
| 2.6918 | 1.79 | 655 | 2.7391 | 1.0353 |
| 2.7142 | 1.81 | 660 | 2.7500 | 1.0283 |
| 2.7057 | 1.82 | 665 | 2.7612 | 1.0127 |
| 2.8348 | 1.83 | 670 | 2.7441 | 1.0056 |
| 2.705 | 1.85 | 675 | 2.7473 | 1.0519 |
| 2.7547 | 1.86 | 680 | 2.7216 | 1.0218 |
| 2.7045 | 1.88 | 685 | 2.7261 | 1.1414 |
| 2.7121 | 1.89 | 690 | 2.7223 | 1.0287 |
| 2.6877 | 1.9 | 695 | 2.7283 | 1.0274 |
| 2.6879 | 1.92 | 700 | 2.7451 | 1.1322 |
| 2.6958 | 1.93 | 705 | 2.7166 | 1.0364 |
| 2.6692 | 1.94 | 710 | 2.7148 | 1.0074 |
| 2.5786 | 1.96 | 715 | 2.7101 | 1.0504 |
| 2.6919 | 1.97 | 720 | 2.6963 | 1.0454 |
| 2.7256 | 1.98 | 725 | 2.7201 | 1.0349 |
| 2.6507 | 2.0 | 730 | 2.7099 | 1.1339 |
| 2.7833 | 2.01 | 735 | 2.7111 | 1.0124 |
| 2.7521 | 2.03 | 740 | 2.7024 | 1.0275 |
| 2.6732 | 2.04 | 745 | 2.7058 | 1.0647 |
| 2.719 | 2.05 | 750 | 2.7200 | 1.0211 |
| 2.701 | 2.07 | 755 | 2.7024 | 1.0808 |
| 2.6444 | 2.08 | 760 | 2.6813 | 1.0582 |
| 2.5592 | 2.1 | 765 | 2.6783 | 1.1010 |
| 2.6444 | 2.11 | 770 | 2.6707 | 1.0946 |
| 2.6944 | 2.12 | 775 | 2.7012 | 1.1315 |
| 2.6733 | 2.14 | 780 | 2.7072 | 1.1144 |
| 2.6998 | 2.15 | 785 | 2.7132 | 1.0206 |
| 2.796 | 2.16 | 790 | 2.7076 | 1.1262 |
| 2.6881 | 2.18 | 795 | 2.6953 | 1.0841 |
| 2.7382 | 2.19 | 800 | 2.6605 | 1.1234 |
| 2.5814 | 2.21 | 805 | 2.6814 | 1.1865 |
| 2.6695 | 2.22 | 810 | 2.6531 | 1.0985 |
| 2.6415 | 2.23 | 815 | 2.6590 | 1.0804 |
| 2.646 | 2.25 | 820 | 2.6514 | 1.0853 |
| 2.6028 | 2.26 | 825 | 2.6723 | 1.1411 |
| 2.6429 | 2.27 | 830 | 2.6729 | 1.0395 |
| 2.6736 | 2.29 | 835 | 2.7039 | 1.0355 |
| 2.6959 | 2.3 | 840 | 2.6510 | 1.0414 |
| 2.6426 | 2.31 | 845 | 2.6660 | 1.1591 |
| 2.7152 | 2.33 | 850 | 2.6361 | 1.0276 |
| 2.7148 | 2.34 | 855 | 2.6723 | 1.2461 |
| 2.6336 | 2.36 | 860 | 2.6332 | 1.0310 |
| 2.665 | 2.37 | 865 | 2.6365 | 1.1312 |
| 2.5607 | 2.38 | 870 | 2.6344 | 1.1301 |
| 2.5614 | 2.4 | 875 | 2.6437 | 1.1513 |
| 2.4899 | 2.41 | 880 | 2.6418 | 1.1532 |
| 2.6794 | 2.42 | 885 | 2.6403 | 1.0272 |
| 2.6814 | 2.44 | 890 | 2.6420 | 1.1323 |
| 2.6614 | 2.45 | 895 | 2.6183 | 1.0525 |
| 2.6629 | 2.47 | 900 | 2.6414 | 1.1569 |
| 2.6166 | 2.48 | 905 | 2.6167 | 1.0265 |
| 2.6374 | 2.49 | 910 | 2.6299 | 1.1720 |
| 2.6035 | 2.51 | 915 | 2.6139 | 1.1565 |
| 2.595 | 2.52 | 920 | 2.6126 | 1.0557 |
| 2.6416 | 2.53 | 925 | 2.6190 | 1.0414 |
| 2.6785 | 2.55 | 930 | 2.6352 | 1.0289 |
| 2.6986 | 2.56 | 935 | 2.6268 | 1.0077 |
| 2.6145 | 2.57 | 940 | 2.6166 | 1.0445 |
| 2.6961 | 2.59 | 945 | 2.6142 | 1.0185 |
| 2.6852 | 2.6 | 950 | 2.6072 | 1.0122 |
| 2.5792 | 2.62 | 955 | 2.6078 | 1.1165 |
| 2.6118 | 2.63 | 960 | 2.6177 | 1.1210 |
| 2.5472 | 2.64 | 965 | 2.6126 | 1.0044 |
| 2.577 | 2.66 | 970 | 2.6051 | 1.0881 |
| 2.5602 | 2.67 | 975 | 2.5992 | 1.0178 |
| 2.695 | 2.68 | 980 | 2.6023 | 1.0248 |
| 2.7017 | 2.7 | 985 | 2.6190 | 1.0041 |
| 2.6327 | 2.71 | 990 | 2.6024 | 1.0142 |
| 2.6193 | 2.73 | 995 | 2.5897 | 1.0148 |
| 2.5939 | 2.74 | 1000 | 2.5900 | 1.0329 |
| 2.5477 | 2.75 | 1005 | 2.5971 | 1.0338 |
| 2.6089 | 2.77 | 1010 | 2.5969 | 1.0064 |
| 2.5625 | 2.78 | 1015 | 2.5899 | 1.0648 |
| 2.5745 | 2.79 | 1020 | 2.5861 | 1.0627 |
| 2.5702 | 2.81 | 1025 | 2.5923 | 1.0526 |
| 2.645 | 2.82 | 1030 | 2.6053 | 1.0199 |
| 2.6869 | 2.83 | 1035 | 2.6227 | 1.0011 |
| 2.6678 | 2.85 | 1040 | 2.6094 | 1.0179 |
| 2.6787 | 2.86 | 1045 | 2.5978 | 1.0028 |
| 2.6246 | 2.88 | 1050 | 2.5965 | 1.0093 |
| 2.5676 | 2.89 | 1055 | 2.5927 | 1.0627 |
| 2.6773 | 2.9 | 1060 | 2.5907 | 1.0817 |
| 2.6114 | 2.92 | 1065 | 2.5932 | 1.1013 |
| 2.6227 | 2.93 | 1070 | 2.5840 | 1.0402 |
| 2.594 | 2.94 | 1075 | 2.5997 | 1.1371 |
| 2.751 | 2.96 | 1080 | 2.5909 | 1.0972 |
| 2.6366 | 2.97 | 1085 | 2.6081 | 1.0598 |
| 2.577 | 2.98 | 1090 | 2.5915 | 1.0410 |
| 2.579 | 3.0 | 1095 | 2.5953 | 1.1433 |
| 2.6706 | 3.01 | 1100 | 2.5913 | 1.0456 |
| 2.6161 | 3.03 | 1105 | 2.6079 | 1.1009 |
| 2.6397 | 3.04 | 1110 | 2.5951 | 1.1771 |
| 2.6246 | 3.05 | 1115 | 2.5730 | 1.0299 |
| 2.5637 | 3.07 | 1120 | 2.5622 | 1.0848 |
| 2.5692 | 3.08 | 1125 | 2.5561 | 1.1472 |
| 2.5948 | 3.1 | 1130 | 2.5568 | 1.0802 |
| 2.5372 | 3.11 | 1135 | 2.5638 | 1.1261 |
| 2.4995 | 3.12 | 1140 | 2.5727 | 1.1395 |
| 2.6304 | 3.14 | 1145 | 2.5671 | 1.0259 |
| 2.6395 | 3.15 | 1150 | 2.5778 | 1.0212 |
| 2.6127 | 3.16 | 1155 | 2.5609 | 1.0457 |
| 2.5919 | 3.18 | 1160 | 2.5604 | 1.0902 |
| 2.6111 | 3.19 | 1165 | 2.5463 | 1.0014 |
| 2.5971 | 3.21 | 1170 | 2.5429 | 1.0022 |
| 2.5887 | 3.22 | 1175 | 2.5394 | 1.0412 |
| 2.5644 | 3.23 | 1180 | 2.5342 | 1.0469 |
| 2.4805 | 3.25 | 1185 | 2.6066 | 1.2668 |
| 2.5324 | 3.26 | 1190 | 2.5395 | 1.0234 |
| 2.5491 | 3.27 | 1195 | 2.5431 | 1.0644 |
| 2.6302 | 3.29 | 1200 | 2.5558 | 1.0680 |
| 2.6139 | 3.3 | 1205 | 2.5711 | 1.0565 |
| 2.5607 | 3.31 | 1210 | 2.5635 | 1.0415 |
| 2.6535 | 3.33 | 1215 | 2.5505 | 1.0613 |
| 2.6129 | 3.34 | 1220 | 2.5403 | 1.0724 |
| 2.5157 | 3.36 | 1225 | 2.5294 | 1.0585 |
| 2.551 | 3.37 | 1230 | 2.5242 | 1.1599 |
| 2.5527 | 3.38 | 1235 | 2.5474 | 1.2327 |
| 2.4964 | 3.4 | 1240 | 2.5244 | 1.0857 |
| 2.5781 | 3.41 | 1245 | 2.5299 | 1.0470 |
| 2.6143 | 3.42 | 1250 | 2.5313 | 1.0019 |
| 2.6566 | 3.44 | 1255 | 2.5431 | 1.0488 |
| 2.5373 | 3.45 | 1260 | 2.5281 | 1.0901 |
| 2.6597 | 3.47 | 1265 | 2.5300 | 1.0610 |
| 2.5457 | 3.48 | 1270 | 2.5130 | 1.0420 |
| 2.5632 | 3.49 | 1275 | 2.5306 | 1.1418 |
| 2.5267 | 3.51 | 1280 | 2.5021 | 1.0293 |
| 2.507 | 3.52 | 1285 | 2.5013 | 1.0196 |
| 2.5713 | 3.53 | 1290 | 2.4978 | 1.0664 |
| 2.4783 | 3.55 | 1295 | 2.4958 | 1.0530 |
| 2.5874 | 3.56 | 1300 | 2.4968 | 1.0059 |
| 2.5744 | 3.57 | 1305 | 2.5078 | 1.0287 |
| 2.5701 | 3.59 | 1310 | 2.4971 | 1.0366 |
| 2.5366 | 3.6 | 1315 | 2.4897 | 1.0191 |
| 2.5679 | 3.62 | 1320 | 2.4830 | 1.0223 |
| 2.5239 | 3.63 | 1325 | 2.4833 | 1.0784 |
| 2.5411 | 3.64 | 1330 | 2.4851 | 1.1522 |
| 2.5037 | 3.66 | 1335 | 2.4792 | 1.0928 |
| 2.5907 | 3.67 | 1340 | 2.4750 | 1.0187 |
| 2.5107 | 3.68 | 1345 | 2.4805 | 1.0873 |
| 2.5908 | 3.7 | 1350 | 2.4753 | 1.0098 |
| 2.6274 | 3.71 | 1355 | 2.4765 | 1.0045 |
| 2.5708 | 3.73 | 1360 | 2.4597 | 1.0456 |
| 2.6039 | 3.74 | 1365 | 2.4503 | 1.0485 |
| 2.5305 | 3.75 | 1370 | 2.4439 | 1.0126 |
| 2.4878 | 3.77 | 1375 | 2.4407 | 1.0162 |
| 2.5055 | 3.78 | 1380 | 2.4421 | 1.0605 |
| 2.5249 | 3.79 | 1385 | 2.4499 | 1.1163 |
| 2.5508 | 3.81 | 1390 | 2.4654 | 1.1472 |
| 2.5827 | 3.82 | 1395 | 2.4510 | 1.0561 |
| 2.6148 | 3.83 | 1400 | 2.4496 | 0.9998 |
| 2.5763 | 3.85 | 1405 | 2.4417 | 1.0067 |
| 2.6077 | 3.86 | 1410 | 2.4458 | 1.0682 |
| 2.5388 | 3.88 | 1415 | 2.4352 | 1.0820 |
| 2.5235 | 3.89 | 1420 | 2.4277 | 1.0784 |
| 2.4996 | 3.9 | 1425 | 2.4245 | 1.0671 |
| 2.5601 | 3.92 | 1430 | 2.4202 | 1.0650 |
| 2.5805 | 3.93 | 1435 | 2.4199 | 1.0530 |
| 2.5841 | 3.94 | 1440 | 2.4228 | 1.0797 |
| 2.4877 | 3.96 | 1445 | 2.4284 | 1.1159 |
| 2.5542 | 3.97 | 1450 | 2.4190 | 1.0575 |
| 2.5961 | 3.98 | 1455 | 2.4162 | 1.0676 |
| 2.495 | 4.0 | 1460 | 2.4165 | 1.0821 |
| 2.6157 | 4.01 | 1465 | 2.4119 | 1.0117 |
| 2.5415 | 4.03 | 1470 | 2.4089 | 1.0110 |
| 2.4916 | 4.04 | 1475 | 2.4032 | 1.0498 |
| 2.5445 | 4.05 | 1480 | 2.3997 | 1.0429 |
| 2.4941 | 4.07 | 1485 | 2.4008 | 1.0141 |
| 2.5113 | 4.08 | 1490 | 2.3975 | 1.0357 |
| 2.4707 | 4.1 | 1495 | 2.3938 | 1.0288 |
| 2.4952 | 4.11 | 1500 | 2.3910 | 1.0300 |
| 2.5017 | 4.12 | 1505 | 2.3861 | 1.0813 |
| 2.5566 | 4.14 | 1510 | 2.3919 | 1.1082 |
| 2.5754 | 4.15 | 1515 | 2.3947 | 1.0074 |
| 2.6138 | 4.16 | 1520 | 2.4040 | 0.9989 |
| 2.5024 | 4.18 | 1525 | 2.3949 | 1.0039 |
| 2.5136 | 4.19 | 1530 | 2.3993 | 1.0496 |
| 2.5646 | 4.21 | 1535 | 2.3981 | 1.0729 |
| 2.4556 | 4.22 | 1540 | 2.3952 | 1.0494 |
| 2.5774 | 4.23 | 1545 | 2.3924 | 1.0345 |
| 2.5126 | 4.25 | 1550 | 2.3888 | 1.0306 |
| 2.4596 | 4.26 | 1555 | 2.3960 | 1.0775 |
| 2.521 | 4.27 | 1560 | 2.3978 | 1.1025 |
| 2.6304 | 4.29 | 1565 | 2.3885 | 1.0433 |
| 2.543 | 4.3 | 1570 | 2.3849 | 1.0072 |
| 2.5601 | 4.31 | 1575 | 2.3855 | 1.0110 |
| 2.6304 | 4.33 | 1580 | 2.3878 | 1.0369 |
| 2.4121 | 4.34 | 1585 | 2.3783 | 1.0366 |
| 2.4261 | 4.36 | 1590 | 2.3746 | 1.0307 |
| 2.5038 | 4.37 | 1595 | 2.3789 | 1.0611 |
| 2.5391 | 4.38 | 1600 | 2.3849 | 1.0738 |
| 2.4341 | 4.4 | 1605 | 2.3779 | 1.0573 |
| 2.5306 | 4.41 | 1610 | 2.3751 | 1.0460 |
| 2.5818 | 4.42 | 1615 | 2.3743 | 1.0251 |
| 2.5531 | 4.44 | 1620 | 2.3723 | 1.0209 |
| 2.51 | 4.45 | 1625 | 2.3755 | 1.0316 |
| 2.5788 | 4.47 | 1630 | 2.3725 | 1.0396 |
| 2.5701 | 4.48 | 1635 | 2.3663 | 1.0292 |
| 2.4194 | 4.49 | 1640 | 2.3641 | 1.0261 |
| 2.5439 | 4.51 | 1645 | 2.3629 | 1.0376 |
| 2.4527 | 4.52 | 1650 | 2.3629 | 1.0563 |
| 2.5705 | 4.53 | 1655 | 2.3654 | 1.0766 |
| 2.4552 | 4.55 | 1660 | 2.3708 | 1.0802 |
| 2.5657 | 4.56 | 1665 | 2.3638 | 1.0248 |
| 2.5371 | 4.57 | 1670 | 2.3639 | 1.0053 |
| 2.5365 | 4.59 | 1675 | 2.3626 | 1.0072 |
| 2.5383 | 4.6 | 1680 | 2.3584 | 1.0170 |
| 2.546 | 4.62 | 1685 | 2.3574 | 1.0469 |
| 2.6006 | 4.63 | 1690 | 2.3517 | 1.0509 |
| 2.4894 | 4.64 | 1695 | 2.3489 | 1.0452 |
| 2.4732 | 4.66 | 1700 | 2.3489 | 1.0586 |
| 2.4933 | 4.67 | 1705 | 2.3501 | 1.0694 |
| 2.4784 | 4.68 | 1710 | 2.3472 | 1.0647 |
| 2.5349 | 4.7 | 1715 | 2.3419 | 1.0299 |
| 2.553 | 4.71 | 1720 | 2.3420 | 1.0115 |
| 2.5035 | 4.73 | 1725 | 2.3415 | 1.0117 |
| 2.561 | 4.74 | 1730 | 2.3418 | 1.0242 |
| 2.4773 | 4.75 | 1735 | 2.3420 | 1.0325 |
| 2.4691 | 4.77 | 1740 | 2.3422 | 1.0394 |
| 2.4959 | 4.78 | 1745 | 2.3405 | 1.0418 |
| 2.4928 | 4.79 | 1750 | 2.3394 | 1.0449 |
| 2.5058 | 4.81 | 1755 | 2.3392 | 1.0489 |
| 2.5193 | 4.82 | 1760 | 2.3390 | 1.0506 |
| 2.5369 | 4.83 | 1765 | 2.3392 | 1.0384 |
| 2.4843 | 4.85 | 1770 | 2.3398 | 1.0236 |
| 2.5074 | 4.86 | 1775 | 2.3400 | 1.0150 |
| 2.4941 | 4.88 | 1780 | 2.3386 | 1.0150 |
| 2.4352 | 4.89 | 1785 | 2.3370 | 1.0172 |
| 2.4372 | 4.9 | 1790 | 2.3362 | 1.0208 |
| 2.4855 | 4.92 | 1795 | 2.3358 | 1.0238 |
| 2.4516 | 4.93 | 1800 | 2.3355 | 1.0276 |
| 2.5281 | 4.94 | 1805 | 2.3356 | 1.0312 |
| 2.5519 | 4.96 | 1810 | 2.3352 | 1.0318 |
| 2.4641 | 4.97 | 1815 | 2.3349 | 1.0294 |
| 2.4515 | 4.98 | 1820 | 2.3348 | 1.0284 |
| 2.553 | 5.0 | 1825 | 2.3347 | 1.0286 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0 |
masapasa/xls-r-ab-test | 4e02879f5d02d18b606e94b221baddc2850461d6 | 2022-01-31T17:22:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | masapasa | null | masapasa/xls-r-ab-test | 0 | 1 | transformers | 35,642 | ---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 140.0674
- Wer: 1.1193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
maximedb/mfaq-mbert | 12aa5285df0a7549edab156844d7779aea283072 | 2021-10-10T20:28:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | maximedb | null | maximedb/mfaq-mbert | 0 | null | transformers | 35,643 | Entry not found |
mbateman/distilbert-base-uncased-finetuned-imdb-accelerate | a6541566266173ec5ee0d02aa8e916ba371ae0cf | 2022-01-21T09:44:13.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mbateman | null | mbateman/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | 35,644 | Entry not found |
mbateman/distilbert-base-uncased-finetuned-squad-d5716d28 | 9a8119ec3269e3fa36abfab42f77826b061f81a9 | 2022-02-11T09:26:12.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | mbateman | null | mbateman/distilbert-base-uncased-finetuned-squad-d5716d28 | 0 | null | transformers | 35,645 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
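As a rough sketch of what this second, task-specific distillation step can look like (not the exact training code used here; the temperature `T` and weight `alpha` are assumed hyperparameters, not values reported for this model):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Hard loss: standard cross-entropy of the student against the gold labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft loss: KL divergence between temperature-softened student and
    # teacher distributions, scaled by T^2 to keep gradient magnitudes stable.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl
```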
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
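For reference, a minimal sketch of how such scores can be computed (the prediction/reference pair below is illustrative only):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{"id": "1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```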
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mbateman/dummy-model | 0afca9c916aaea6d1a6746c35ddd54a6ab4b2c85 | 2021-11-26T07:56:14.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mbateman | null | mbateman/dummy-model | 0 | null | transformers | 35,646 | Entry not found |
mbateman/marian-finetuned-kde4-en-to-fr | c5b443d58bbbb8fd22cec2cfaa092c7afa0970db | 2022-01-27T17:33:02.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | mbateman | null | mbateman/marian-finetuned-kde4-en-to-fr | 0 | null | transformers | 35,647 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
mboth/sentence-eng | ade70eda561e50f36c2aa944c98394989fcdfd58 | 2021-06-25T11:41:14.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | mboth | null | mboth/sentence-eng | 0 | 1 | transformers | 35,648 | Entry not found |
mechanicalsea/efficient-tdnn | 049a0c766f3004f2076e5ee93f68c22501ec2615 | 2021-11-25T02:11:25.000Z | [
"en",
"dataset:voxceleb1",
"dataset:voxceleb2",
"arxiv:2103.13581",
"embeddings",
"Speaker",
"Verification",
"Identification",
"NAS",
"TDNN",
"pytorch",
"license:mit"
] | null | false | mechanicalsea | null | mechanicalsea/efficient-tdnn | 0 | 2 | null | 35,649 | ---
language:
- en
license: mit
tags:
- embeddings
- Speaker
- Verification
- Identification
- NAS
- TDNN
- pytorch
datasets:
- voxceleb1
- voxceleb2
metrics:
- EER
- minDCF:
- p_target: 0.01
---
# EfficientTDNN
This repository provides all the necessary tools to perform speaker verification with a NAS alternative named EfficientTDNN.
The system can be used to extract speaker embeddings at different model sizes.
It is trained on Voxceleb2 training data using data augmentation.
Model performance on the Voxceleb1 test set (Cleaned)/Vox1-O is reported as follows.
| Supernet Stage | Subnet | MACs (3s) | Params | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| depth | Base | 1.45G | 5.79M | 0.94 | 0.089 |
| width 1 | Mobile | 570.98M | 2.42M | 1.41 | 0.124 |
| width 2 | Small | 204.07M | 899.20K | 2.20 | 0.219 |
The details of the three subnets are:
- Base: (3, [512, 512, 512, 512], [5, 3, 3, 3], 1536)
- Mobile: (3, [384, 256, 256, 256], [5, 3, 3, 3], 768)
- Small: (2, [256, 256, 256], [3, 3, 3], 400)
## Compute your speaker embeddings
```python
import torchaudio
from sugar.models import WrappedModel
wav_file = f"{vox1_root}/id10270/x6uYqmx31kE/00001.wav"
signal, fs = torchaudio.load(wav_file)
repo_id = "mechanicalsea/efficient-tdnn"
supernet_filename = "depth/depth.torchparams"
subnet_filename = "depth/depth.ecapa-tdnn.3.512.512.512.512.5.3.3.3.1536.bn.tar"
subnet, info = WrappedModel.from_pretrained(
repo_id=repo_id, supernet_filename=supernet_filename, subnet_filename=subnet_filename)
embedding = subnet(signal)
```
## Inference on GPU
To perform inference on the GPU, add `subnet = subnet.to(device)` after calling the `from_pretrained` method.
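A minimal sketch, reusing the `subnet` and `signal` objects from the snippet above:
```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
subnet = subnet.to(device)
embedding = subnet(signal.to(device))  # the input must live on the same device
```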
## Model Description
Models are listed as follows.
- **Dynamic Kernel**: The model enables varying kernel sizes in {1,3,5}, `kernel/kernel.torchparams`.
- **Dynamic Depth**: The model additionally enables varying depth in {2,3,4}, built on the **Dynamic Kernel** version, `depth/depth.torchparams`.
- **Dynamic Width 1**: The model additionally enables varying width ratios in [0.5, 1.0], built on the **Dynamic Depth** version, `width1/width1.torchparams`.
- **Dynamic Width 2**: The model additionally enables varying width ratios in [0.25, 0.5], built on the **Dynamic Width 1** version, `width2/width2.torchparams`.
Furthermore, some subnets are provided as the batch-norm weights corresponding to their trained supernets, as follows.
- **Dynamic Kernel**
1. `kernel/kernel.max.bn.tar`
2. `kernel/kernel.Kmin.bn.tar`
- **Dynamic Depth**
1. `depth/depth.max.bn.tar`
2. `depth/depth.Kmin.bn.tar`
3. `depth/depth.Dmin.bn.tar`
4. `depth/depth.3.512.5.5.3.3.1536.bn.tar`
5. `depth/depth.ecapa-tdnn.3.512.512.512.512.5.3.3.3.1536.bn.tar`
- **Dynamic Width 1**
1. `width1/width1.torchparams`
2. `width1/width1.max.bn.tar`
3. `width1/width1.Kmin.bn.tar`
4. `width1/width1.Dmin.bn.tar`
5. `width1/width1.C1min.bn.tar`
6. `width1/width1.3.383.256.256.256.5.3.3.3.768.bn.tar`
- **Dynamic Width 2**
1. `width2/width2.max.bn.tar`
2. `width2/width2.Kmin.bn.tar`
3. `width2/width2.Dmin.bn.tar`
4. `width2/width2.C1min.bn.tar`
5. `width2/width2.C2min.bn.tar`
6. `width2/width2.3.384.3.1152.bn.tar`
7. `width2/width2.3.256.256.384.384.1.3.5.3.1152.bn.tar`
8. `width2/width2.2.256.256.256.3.3.3.400.bn.tar`
The tags are described as follows.
- max: (4, [512, 512, 512, 512, 512], [5, 5, 5, 5, 5], 1536)
- Kmin: (4, [512, 512, 512, 512, 512], [1, 1, 1, 1, 1], 1536)
- Dmin: (2, [512, 512, 512], [1, 1, 1], 1536)
- C1min: (2, [256, 256, 256], [1, 1, 1], 768)
- C2min: (2, [128, 128, 128], [1, 1, 1], 384)
More details about EfficientTDNN can be found in the paper [EfficientTDNN](https://arxiv.org/abs/2103.13581).
## **Citing EfficientTDNN**
Please, cite EfficientTDNN if you use it for your research or business.
```bibtex
@article{rwang-efficienttdnn-2021,
title={{EfficientTDNN}: Efficient Architecture Search for Speaker Recognition},
author={Rui Wang and Zhihua Wei and Haoran Duan and Shouling Ji and Yang Long and Zhen Hong},
journal={arXiv preprint arXiv:2103.13581},
year={2021},
eprint={2103.13581},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2103.13581}
}
```
|
meghana/hitalmqa-finetuned-squad | fa4e0177730c508062f20ba540f927504dc4bee3 | 2021-10-19T17:34:53.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | meghana | null | meghana/hitalmqa-finetuned-squad | 0 | null | transformers | 35,650 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hitalmqa-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hitalmqa-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
mehari/mdl | 5ec0907dec8407221673c29ea767977e91966790 | 2022-05-24T07:59:48.000Z | [
"pytorch"
] | null | false | mehari | null | mehari/mdl | 0 | null | null | 35,651 | Entry not found |
mehari/troberta | 4a63649cd4fc581c916a7585a8d3d4d10a867639 | 2022-05-24T16:46:32.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mehari | null | mehari/troberta | 0 | null | transformers | 35,652 | Entry not found |
melon422/DialoGPT-medium-MelonBot | a8e52014f7caf56084c9169c21e8510a97192fc3 | 2022-01-13T05:08:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | melon422 | null | melon422/DialoGPT-medium-MelonBot | 0 | null | transformers | 35,653 | ---
tags:
- conversational
---
# Melon Bot DialoGPT Model |
mengsay/t5-small-finetuned-gigaword | 89bfd5d4fe8105ff4ca4366bf01c8d52fcf8c24b | 2021-09-21T06:47:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mengsay | null | mengsay/t5-small-finetuned-gigaword | 0 | null | transformers | 35,654 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-gigaword
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-gigaword
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.10.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
mengsay/t5-small-t5small-gigaword | 3f8de6bb6461910242ca8b2ba7d4a5f9529df8d2 | 2021-09-23T04:37:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mengsay | null | mengsay/t5-small-t5small-gigaword | 0 | null | transformers | 35,655 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-t5small-gigaword
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-t5small-gigaword
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
- Rouge1: 50.1555
- Rouge2: 25.5096
- Rougel: 46.5771
- Rougelsum: 46.5827
- Gen Len: 14.246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9066 | 1.0 | 118874 | 1.4971 | 49.2994 | 24.75 | 45.8251 | 45.8162 | 14.3197 |
| 1.8339 | 2.0 | 237748 | 1.4449 | 49.6767 | 25.1673 | 46.1631 | 46.156 | 14.2557 |
| 1.8067 | 3.0 | 356622 | 1.4220 | 50.043 | 25.4886 | 46.4577 | 46.437 | 14.2857 |
| 1.8141 | 4.0 | 475496 | 1.4097 | 50.11 | 25.4327 | 46.502 | 46.5001 | 14.2653 |
| 1.7985 | 5.0 | 594370 | 1.4052 | 50.1555 | 25.5096 | 46.5771 | 46.5827 | 14.246 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
metamong1/bigbart_full_tapt_ep3_bs16_pre_noam_LB | 4b2e73317d7ccc42471dd001cd17dd90d50b1a94 | 2021-12-23T13:11:38.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | metamong1 | null | metamong1/bigbart_full_tapt_ep3_bs16_pre_noam_LB | 0 | null | transformers | 35,656 | Entry not found |
metamong1/bigbart_tapt_ep3_bs16_pre_RD | 7b4e4a30f44380b0adcc0229aede561627534023 | 2021-12-23T13:42:42.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | metamong1 | null | metamong1/bigbart_tapt_ep3_bs16_pre_RD | 0 | null | transformers | 35,657 | Entry not found |
metamong1/bigbart_tapt_ep3_bs16_pre_noam | fd0009fbcd419b10a554eb1abd5ad5c8d37175e5 | 2021-12-22T16:34:23.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | metamong1 | null | metamong1/bigbart_tapt_ep3_bs16_pre_noam | 0 | null | transformers | 35,658 | Entry not found |
mhd-mst/pure-finetuning-final | 0ff6cd2f06e0a415e22cc0cc1c241ffefa8e586e | 2022-01-18T02:55:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mhd-mst | null | mhd-mst/pure-finetuning-final | 0 | null | transformers | 35,659 | Entry not found |
mhd-mst/pure-finetuning5 | 68e2295a61a49dccc7f8193d1e488f7e5da8f170 | 2022-01-17T21:43:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mhd-mst | null | mhd-mst/pure-finetuning5 | 0 | null | transformers | 35,660 | Entry not found |
michaelrglass/dpr-ctx_encoder-multiset-base-kgi0-trex | 402c812eef15e64c8a1f5b5752cfcf31a1310b64 | 2021-04-20T18:28:13.000Z | [
"pytorch",
"dpr",
"transformers"
] | null | false | michaelrglass | null | michaelrglass/dpr-ctx_encoder-multiset-base-kgi0-trex | 0 | 1 | transformers | 35,661 | Entry not found |
michalwilk123/distilbert-imdb-positive | 7ace22fafbea92d7e87682d7e65995712967e8e8 | 2021-05-25T19:59:11.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | michalwilk123 | null | michalwilk123/distilbert-imdb-positive | 0 | null | transformers | 35,662 | DistilBERT model trained on positive IMDB reviews |
mictiong85/wav2vec2-base-timit-demo-colab | 2c7b6753714f803af639cfbfabae64aebf26452a | 2021-11-11T11:48:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mictiong85 | null | mictiong85/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 35,663 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4635
- Wer: 0.3357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6808 | 4.0 | 500 | 1.5478 | 1.0481 |
| 0.835 | 8.0 | 1000 | 0.4611 | 0.4703 |
| 0.3013 | 12.0 | 1500 | 0.4327 | 0.3887 |
| 0.1741 | 16.0 | 2000 | 0.4073 | 0.3677 |
| 0.1309 | 20.0 | 2500 | 0.4306 | 0.3595 |
| 0.1097 | 24.0 | 3000 | 0.4318 | 0.3475 |
| 0.0825 | 28.0 | 3500 | 0.4635 | 0.3357 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
mimi/Waynehills-NLP-doogie-AIHub-paper-summary | 1039e161a057c4e383a93314dacedc4bdb8e01ee | 2022-01-06T08:05:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mimi | null | mimi/Waynehills-NLP-doogie-AIHub-paper-summary | 0 | null | transformers | 35,664 | ---
tags:
- generated_from_trainer
model-index:
- name: Waynehills-NLP-doogie-AIHub-paper-summary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Waynehills-NLP-doogie-AIHub-paper-summary
This model is a fine-tuned version of [mimi/Waynehills-NLP-doogie](https://huggingface.co/mimi/Waynehills-NLP-doogie) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.6206
- eval_runtime: 309.223
- eval_samples_per_second: 38.167
- eval_steps_per_second: 4.773
- epoch: 3.75
- step: 60000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.12.2
- Pytorch 1.10.0+cu111
- Datasets 1.5.0
- Tokenizers 0.10.3
|
mimi/Waynehills-NLP-mimi | 4a42d1d35386d0f65277d6c23ad3df32d3d3df9d | 2022-01-05T06:25:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mimi | null | mimi/Waynehills-NLP-mimi | 0 | null | transformers | 35,665 | Entry not found |
minhdang241/TAPT_distillBERT | 54a60b55d32e0869bdee932386dcdc3c3a92e8c9 | 2021-04-25T23:07:02.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | minhdang241 | null | minhdang241/TAPT_distillBERT | 0 | null | transformers | 35,666 | Entry not found |
minhdang241/robustqa-baseline-01 | 762db47f1a7ed3c5b3e360eb31ba1d0faaa3ba55 | 2021-04-27T02:04:01.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | minhdang241 | null | minhdang241/robustqa-baseline-01 | 0 | null | transformers | 35,667 | Entry not found |
minimaxir/ai-generated-pokemon-rudalle | d8e77e518fe802646a5d2c2541e56641a08d1cec | 2022-01-15T01:41:47.000Z | [
"pytorch",
"en",
"rudalle",
"pokemon",
"image-generation",
"license:mit"
] | null | false | minimaxir | null | minimaxir/ai-generated-pokemon-rudalle | 0 | 9 | null | 35,668 | ---
language:
- en
tags:
- rudalle
- pokemon
- image-generation
license: mit
---
# ai-generated-pokemon-rudalle

A [ruDALL-E](https://github.com/sberbank-ai/ru-dalle) model finetuned on Pokémon using the finetuning example Colab Notebook [linked in that repo](https://colab.research.google.com/drive/1Tb7J4PvvegWOybPfUubl5O7m5I24CBg5?usp=sharing). It was used to create AI-generated Pokémon that went viral ([10k+ retweets](https://twitter.com/minimaxir/status/1470913487085785089) on Twitter + [30k+ upvotes](https://www.reddit.com/r/pokemon/comments/rgmyxp/i_trained_an_ai_on_all_the_official_pokemon/) on Reddit)
The model used above was trained for 12 epochs (4.5 hours on a P100), at a max learning rate of `1e-5`.
## Demo
You can play with this model using [this Colab Notebook](https://colab.research.google.com/drive/1A3t2gQofQGeXo5z1BAr1zqYaqVg3czKd?usp=sharing).
## License
MIT
|
minn/my-bert-mlm | e581e4193549b938c97c7d4d6398de631f29c3e6 | 2021-10-28T15:31:19.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | minn | null | minn/my-bert-mlm | 0 | null | transformers | 35,669 | Entry not found |
minsiam/DialoGPT-medium-harrypotterbot | 41ae2953704a338964317cfaa588d5d7fb75157a | 2021-09-20T07:52:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | minsiam | null | minsiam/DialoGPT-medium-harrypotterbot | 0 | null | transformers | 35,670 | ---
tags:
- conversational
---
# Harry Potter DialoGPT-medium Model |
minwoo/myelectra-small-discriminator | 344b2a12b96c3d1517f274c15238b147b7c4fcb6 | 2020-07-25T10:29:44.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | minwoo | null | minwoo/myelectra-small-discriminator | 0 | null | transformers | 35,671 | Entry not found |
mishig/tiny-layoutlmv2-tokenclassification | 5c340f7e0716d226a8de96de4ca87ba037543a22 | 2021-09-23T14:46:34.000Z | [
"pytorch"
] | null | false | mishig | null | mishig/tiny-layoutlmv2-tokenclassification | 0 | null | null | 35,672 | Entry not found |
mittalnishit/DialoGPT-medium-rickman2 | 9652b97680269861d03f5dc1991a58d290ad85c3 | 2021-06-23T08:34:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mittalnishit | null | mittalnishit/DialoGPT-medium-rickman2 | 0 | null | transformers | 35,673 | ---
tags:
- conversational
---
# DialoGPT-medium-rickman2 |
mjstamper/DialoGPT-small-samwise | 648820ab6d4e74ed4328d92af04ec73bbd487838 | 2021-08-27T21:54:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mjstamper | null | mjstamper/DialoGPT-small-samwise | 0 | null | transformers | 35,674 | ---
tags:
- conversational
---
# Samwise Gamgee DialoGPT Model |
mk3smo/dialogpt-med-duckfinal | 6e1592fadd0bf72207ea7641e73bbde7c8d68a14 | 2021-12-31T18:22:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mk3smo | null | mk3smo/dialogpt-med-duckfinal | 0 | null | transformers | 35,675 | ---
tags:
- conversational
---
# yeah |
mklucifer/DialoGPT-small-DEADPOOL | fc917f6679cf77e3ba695606f5394a155783888a | 2021-10-27T14:25:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mklucifer | null | mklucifer/DialoGPT-small-DEADPOOL | 0 | null | transformers | 35,676 | ---
tags:
- conversational
---
# DEADPOOL DialoGPT Model |
mm/roberta-base-mld | 7734edaf3598d0dc382d32a442b005c6ac35147e | 2021-05-20T17:54:53.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | mm | null | mm/roberta-base-mld | 0 | null | transformers | 35,677 | # roberta-base-mld
This is a pretrained roberta-base model for machine learning domain documents.
|
mohamed-illiyas/wav2vec2-300m-lj-demo-colab | b9b49299c2b31a9d46251956581d065abe504e00 | 2022-02-22T03:59:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | mohamed-illiyas | null | mohamed-illiyas/wav2vec2-300m-lj-demo-colab | 0 | null | transformers | 35,678 | Entry not found |
mohammedks713/DialoGPT-small-harrypotter | fbe4e285da643defaa69b402433533419a06dfed | 2021-09-06T04:57:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mohammedks713 | null | mohammedks713/DialoGPT-small-harrypotter | 0 | null | transformers | 35,679 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
mohammedks713/DialoGPT-small-jonsnow | 71de5f4cab87a7a711b42090efca0a80a66dc36f | 2021-09-06T05:57:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mohammedks713 | null | mohammedks713/DialoGPT-small-jonsnow | 0 | null | transformers | 35,680 | ---
tags:
- conversational
---
# Jon Snow DialoGPT Model |
mohsenfayyaz/BERT_Warmup | 1f68cdd0bb7e2369caa2924b0f628c46ad3f0460 | 2021-03-15T10:54:28.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mohsenfayyaz | null | mohsenfayyaz/BERT_Warmup | 0 | null | transformers | 35,681 | Entry not found |
molly-hayward/bioelectra-base-discriminator | adbb2ada0be242bc9d817a8e8c6511465ef196cf | 2021-04-17T16:59:46.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | molly-hayward | null | molly-hayward/bioelectra-base-discriminator | 0 | null | transformers | 35,682 | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the discriminator in transformers:
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("molly-hayward/bioelectra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-discriminator") |
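A short usage sketch (the sentence is illustrative, not from the training data); higher logits mean the discriminator judges a token as replaced:

```python
inputs = tokenizer("the patient was treated with antibiotics", return_tensors="pt")
logits = discriminator(**inputs).logits          # shape: (batch, seq_len)
predicted_replaced = torch.round(torch.sigmoid(logits))
```
|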
molly-hayward/bioelectra-base-generator | 4838743f1ab9f08911435159aff13585e82676d5 | 2021-04-17T16:59:28.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | molly-hayward | null | molly-hayward/bioelectra-base-generator | 0 | null | transformers | 35,683 | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the generator in transformers:
from transformers import ElectraForMaskedLM, ElectraTokenizerFast
import torch
generator = ElectraForMaskedLM.from_pretrained("molly-hayward/bioelectra-base-generator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-generator") |
molly-hayward/bioelectra-small-discriminator | 32a125d83287a7e2d90f889363b00e1accb886e4 | 2021-04-17T16:58:44.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | molly-hayward | null | molly-hayward/bioelectra-small-discriminator | 0 | null | transformers | 35,684 | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the discriminator in transformers:
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("molly-hayward/bioelectra-small-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-small-discriminator") |
molly-hayward/bioelectra-small-generator | 336833d84d466c3afacb3e143d7aa9e59d757089 | 2021-04-17T16:58:15.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | molly-hayward | null | molly-hayward/bioelectra-small-generator | 0 | null | transformers | 35,685 | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the generator in transformers:
from transformers import ElectraForMaskedLM, ElectraTokenizerFast
import torch
generator = ElectraForMaskedLM.from_pretrained("molly-hayward/bioelectra-small-generator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-small-generator") |
momo/gpt2-kiosk | 366824c536e14445c65968f5b60017bf937ccc9f | 2021-11-19T07:42:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | momo | null | momo/gpt2-kiosk | 0 | null | transformers | 35,686 | # kiosk_bot
A simple kiosk chatbot built with KoGPT2.
The data used is AIHub's [Korean dialogue dataset](https://aihub.or.kr/aidata/85).
The data was used only for training and is not released.
## Architecture
Implemented by following the Hugging Face examples.
<img width="549" alt="gpt" src="https://user-images.githubusercontent.com/60643542/142431681-85db3d74-172d-45f0-9433-de43a8aeae17.png">
The ```input``` is constructed as ```User + <BOS> + System + <EOS>```.
The model is trained to maximize ```P(system|user)``` for the ```output```.
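A minimal sketch of this input construction (hypothetical helper; the actual preprocessing lives in `kiosk_bot/GPT-2_fine_tune.py`, and the `<BOS>`/`<EOS>` token strings are assumptions):
```python
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2", bos_token="<BOS>", eos_token="<EOS>"
)

def build_example(user_utterance: str, system_utterance: str):
    # input = User + <BOS> + System + <EOS>; the System span is what the
    # model learns to generate, i.e. it maximizes P(system | user).
    text = user_utterance + tokenizer.bos_token + system_utterance + tokenizer.eos_token
    return tokenizer(text, return_tensors="pt")
```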
## Install
```
git clone https://github.com/momozzing/kiosk_bot.git
pip install -r requirements.txt
```
## How to train?
If you just want to chat with the already-trained model, you can skip ahead to ```How to chat``` after ```install```.
```
cd kiosk_bot/data  ## put the data you want to train on in this directory
kiosk_bot/GPT-2_fine_tune.py  ## update the data loader file names inside this script
deepspeed --num_gpus=1 GPT-2_fine_tune.py  ## after the changes, run training with this command
## then use the generated skt-kogpt2-base-v2.pt
```
## How to chat?
```
python interactive.py  ## chat using the original training data
python new_data_interactive.py  ## load a .pt file fine-tuned on new data and chat with it
-- sample chat with the original data --
User: An americano, please
System: Would you like it hot?
User: Iced, please
System: Is a tall size okay?
User: A grande size, please
System: Sorry, we only have regular and large sizes here
User: Then a large, please
System: Okay, will this be to go?
User: I'll have it here
System: Then I'll prepare it in a mug for you
User: Thank you
System: Have a nice day
```
## Reference
[HuggingFace](https://huggingface.co/transformers/index.html)
[KoGPT2](https://github.com/SKT-AI/KoGPT2)
[AIHUB](https://aihub.or.kr/)
|
monologg/kocharelectra-base-modu-ner-sx | 067e1489571903d85152b2676516aeef083b66f4 | 2020-12-02T23:49:27.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | monologg | null | monologg/kocharelectra-base-modu-ner-sx | 0 | null | transformers | 35,687 | Entry not found |
monologg/koelectra-small-v3-goemotions | b70df691bc610b31dacd6189a370d35b2b58d8df | 2021-02-09T14:41:12.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | monologg | null | monologg/koelectra-small-v3-goemotions | 0 | null | transformers | 35,688 | Entry not found |
morganchen1007/layoutlmv2-finetuned-ORIfunsd-1216 | 25d80a3c099ace231d68d18efdfb658f9a439b4b | 2021-12-16T09:30:29.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | morganchen1007 | null | morganchen1007/layoutlmv2-finetuned-ORIfunsd-1216 | 0 | null | transformers | 35,689 | Entry not found |
morganchen1007/layoutlmv2-finetuned-funsd-test | eca83981b12969d80f4b68b60417d2b93522c222 | 2021-12-09T07:58:31.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | morganchen1007 | null | morganchen1007/layoutlmv2-finetuned-funsd-test | 0 | null | transformers | 35,690 | Entry not found |
motiondew/bert-finetuned-lr2-e5-b16-ep2 | 82bf4d3184ddf6bae4c7a66626aa4f681c58f768 | 2021-06-22T17:55:12.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | motiondew | null | motiondew/bert-finetuned-lr2-e5-b16-ep2 | 0 | null | transformers | 35,691 | Entry not found |
motiondew/bert-sd1-small | 2900c27e5fe592bf36cf57df4da1851efe44995a | 2021-07-01T09:20:36.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | motiondew | null | motiondew/bert-sd1-small | 0 | null | transformers | 35,692 | Entry not found |
mra1ster/DialoGPT_scully_small | 733233b6d892bd406e3f51d0af762e812b4e3bc3 | 2021-10-09T06:38:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mra1ster | null | mra1ster/DialoGPT_scully_small | 0 | null | transformers | 35,693 | ---
tags:
- conversational
---
# Scully from XFiles DialoGPT model |
mrm8488/GuaPeTe-2-tiny-finetuned-TED | 0831fe9190a5b770a21938ca5e615f7f0251ee70 | 2021-05-23T10:14:53.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"es",
"transformers",
"spanish",
"gpt-2",
"spanish gpt2"
] | text-generation | false | mrm8488 | null | mrm8488/GuaPeTe-2-tiny-finetuned-TED | 0 | null | transformers | 35,694 |
---
language: es
tags:
- spanish
- gpt-2
- spanish gpt2
widget:
- text: "Ustedes tienen la oportunidad de"
---
# GuaPeTe-2-tiny fine-tuned on TED dataset for CLM |
mrm8488/albert-base-v2-finetuned-mnli-pabee | b78ead58cf2cd49272b7afeab3bc1507137f9348 | 2020-07-10T22:51:35.000Z | [
"pytorch",
"albert",
"transformers"
] | null | false | mrm8488 | null | mrm8488/albert-base-v2-finetuned-mnli-pabee | 0 | null | transformers | 35,695 | Entry not found |
mrm8488/bert2bert-small_shared-question-generation | 9e5b50f3ac94da170c4fb75ec6efc81409c84fff | 2020-12-26T12:28:08.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert2bert-small_shared-question-generation | 0 | null | transformers | 35,696 | Entry not found |
mrm8488/bert2bert_shared-portuguese-question-generation | 5362cd0cdb479a5927e50a7304bda2b77afb9d78 | 2020-12-12T18:30:18.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert2bert_shared-portuguese-question-generation | 0 | null | transformers | 35,697 | Entry not found |
mrm8488/codebert-finetuned-clone-detection | 80a21cef60d7680225b31398f0305e6e53aa8d0f | 2021-05-20T18:22:42.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | mrm8488 | null | mrm8488/codebert-finetuned-clone-detection | 0 | 1 | transformers | 35,698 | Entry not found |
mrm8488/electra-large-finetuned-squadv1 | cc57d67e664a3116b3a8498f4a9a72744c71c1ca | 2020-07-01T10:16:16.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/electra-large-finetuned-squadv1 | 0 | null | transformers | 35,699 | Entry not found |