| modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
beatajackowska/DialoGPT-RickBot | dcd05a25a1094e6d2c1ee8527c551e0897bdf3ef | 2021-08-31T21:28:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | beatajackowska | null | beatajackowska/DialoGPT-RickBot | 1 | null | transformers | 28,700 | ---
tags:
- conversational
---
RICK!!! |
benajtil/DialoGPT-small-Daddyben | 8f54a93a1b7f827805df49c62f5bafacdf3b0854 | 2022-01-30T13:15:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | benajtil | null | benajtil/DialoGPT-small-Daddyben | 1 | null | transformers | 28,701 | ---
tags:
- conversational
---
# DaddyBen DialoGPT Model |
benajtil/DialoGPT-small-RickAndMortyScripts | e8a5c449f665ff6d9bc2376e3b3cd27e72afcb97 | 2022-01-28T12:46:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | benajtil | null | benajtil/DialoGPT-small-RickAndMortyScripts | 1 | null | transformers | 28,702 | ---
tags:
- conversational
---
# Rick And Morty Scripts DialoGPT Model |
benjamin/gpt2-wechsel-swahili | 2f0b3dd5febbad85ec27adbf82f4efadf3d10182 | 2022-07-13T23:43:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"sw",
"transformers",
"license:mit"
] | text-generation | false | benjamin | null | benjamin/gpt2-wechsel-swahili | 1 | null | transformers | 28,703 | ---
language: sw
license: mit
---
# gpt2-wechsel-swahili
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
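The card itself includes no usage snippet; a minimal text-generation sketch with the standard `transformers` pipeline API could look like the following (the Swahili prompt is only an illustrative placeholder):
```python
from transformers import pipeline

# Load the Swahili GPT-2 model transferred with WECHSEL.
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-swahili")

# Illustrative prompt ("Once upon a time"); any Swahili text works here.
print(generator("Hapo zamani za kale", max_new_tokens=30)[0]["generated_text"])
```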
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
benny6/roberta-tydiqa | b27940efd3f40fe8d410ee25d6407ff3a02b2303 | 2021-05-24T12:19:00.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | benny6 | null | benny6/roberta-tydiqa | 1 | null | transformers | 28,704 | Entry not found |
beomi/exKcBERT-paws-extonly | f87a388fac41b05f66b6ac31428931096e5550c9 | 2021-06-14T06:35:28.000Z | [
"pytorch",
"exbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | beomi | null | beomi/exKcBERT-paws-extonly | 1 | null | transformers | 28,705 | Entry not found |
beomi/exKcBERT-paws | 4836852d76d2712023d013ea81d2a2792bb79399 | 2021-06-10T16:21:09.000Z | [
"pytorch",
"exbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | beomi | null | beomi/exKcBERT-paws | 1 | null | transformers | 28,706 | Entry not found |
bestminerevah/DialoGPT-small-thetenthdoctor | d9650787c8622972debaf7492f2f3fa1b614cf94 | 2021-08-29T12:42:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | bestminerevah | null | bestminerevah/DialoGPT-small-thetenthdoctor | 1 | null | transformers | 28,707 | ---
tags:
- conversational
---
# The Tenth Doctor DialoGPT Model |
beyhan/checkpoint-3750 | 58025f6ee67a21ce919e09f337ba99b075182249 | 2021-05-19T12:38:52.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | beyhan | null | beyhan/checkpoint-3750 | 1 | null | transformers | 28,708 | Entry not found |
bhan/distilbert-base-uncased-finetuned-squad | f8f4da6bbf9132cb1a40ea83e2902d753de39c8b | 2022-01-04T19:20:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | bhan | null | bhan/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 28,709 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
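No inference example is given in the card; a minimal question-answering sketch using the standard `transformers` pipeline (the question and context below are made-up placeholders, not from the card) might look like:
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model.
qa = pipeline("question-answering", model="bhan/distilbert-base-uncased-finetuned-squad")

# Placeholder question/context pair for illustration only.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```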
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 5.8757 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
|
bhavya689/DialoGPT-large-chandler | e100a0df8c95b69037f92f681828e1748029351a | 2021-11-13T16:30:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | bhavya689 | null | bhavya689/DialoGPT-large-chandler | 1 | null | transformers | 28,710 | ---
tags:
- conversational
---
# Chandler DialoGPT Model |
bigjoedata/obama-gpt2-sm | dfd46f1861e0d28d9cd79cce69ac013d6685cbb9 | 2021-05-21T14:14:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bigjoedata | null | bigjoedata/obama-gpt2-sm | 1 | null | transformers | 28,711 | Entry not found |
bigjoedata/trump-gpt2-sm | a5c5133acadc925471a314c8b8a7c15773892c3b | 2021-05-21T14:21:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bigjoedata | null | bigjoedata/trump-gpt2-sm | 1 | null | transformers | 28,712 | Entry not found |
birgermoell/swedish-common-voice-vox-voxpopuli | 419d47450339d3ca6b838479f8f98eb4a7c1f040 | 2021-07-05T23:02:25.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/swedish-common-voice-vox-voxpopuli | 1 | null | transformers | 28,713 | ---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: common-voice-vox-populi-swedish by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Vox Populi Swedish
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 36.951816
---
# common-voice-vox-populi-swedish
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER: 22.684600
|
birgermoell/wav2vec2-large-xlsr-finnish | 08790a5917eae8fc5332b396fbf433ba07bbef63 | 2021-07-05T23:13:42.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/wav2vec2-large-xlsr-finnish | 1 | 0 | transformers | 28,714 | ---
language: fi
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 55.097365
---
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
The WER is 55.097365
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found here
https://colab.research.google.com/drive/16AyzqMWU_aWNe3IA-NxrhskB1WLPHG-Q?usp=sharing
|
birgermoell/wav2vec2-luganda | 05ddec5963ef53b8bb48186c0491d4be836d2f0f | 2021-07-05T23:22:11.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lg",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/wav2vec2-luganda | 1 | 1 | transformers | 28,715 | ---
language: lg
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Luganda by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Luganda
type: common_voice
args: lg
metrics:
- name: Test WER
type: wer
value: 48.31
---
# Wav2Vec2-Large-XLSR-53-Luganda
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER: 48.314356
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found here
https://colab.research.google.com/drive/1ZeII36LZ5IpBrTV7kBaTVfhDqygznlmC?usp=sharing
|
bmdonnell/DialoGPT-medium-harrypotter | 5887f6153ec6e9185f4b80ee2eb10001b440cbea | 2021-08-28T04:56:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | bmdonnell | null | bmdonnell/DialoGPT-medium-harrypotter | 1 | null | transformers | 28,716 | ---
tags:
- conversational
---
# Harry Potter Bot |
boydster/DialoGPT-small-gollum | 90e3ff721b95cba131806f93095234f17090066a | 2021-10-02T19:48:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | boydster | null | boydster/DialoGPT-small-gollum | 1 | null | transformers | 28,717 | ---
tags:
- conversational
---
# Gollum DialoGPT Model |
brimeggi/testbot2 | 5f6267f43f1867590fc29035a7da9b5e763226e5 | 2021-08-13T13:16:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | brimeggi | null | brimeggi/testbot2 | 1 | null | transformers | 28,718 | ---
tags:
- conversational
---
# RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
|
britama/DialoGPT-small-psycho | 25fbcf2c85e621f603d4202cb1f7627afabd763b | 2021-08-30T01:53:02.000Z | [
"pytorch"
] | null | false | britama | null | britama/DialoGPT-small-psycho | 1 | null | null | 28,719 | Entry not found |
briverse/vi-electra-small-cased | 7e81769eaba4b162899b38f1a17564d113f68a75 | 2021-02-04T15:19:22.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-small-cased | 1 | null | transformers | 28,720 | Entry not found |
cahya/output | 593378dda97b1eca9b3593cc9875ad65af8f06d0 | 2022-02-01T15:40:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/output | 1 | null | transformers | 28,721 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1822
- Wer: 0.1423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
cahya/wav2vec2-base-30h-290e | 7dcba5929f24534469596f2459e60c5a3306a6ec | 2021-07-05T23:37:40.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | cahya | null | cahya/wav2vec2-base-30h-290e | 1 | null | transformers | 28,722 | Entry not found |
cahya/wav2vec2-base-test | 1b0c10cfdcb98fedda99869878c5d1e2536fae9d | 2021-07-05T23:38:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-base-test | 1 | null | transformers | 28,723 | Entry not found |
cahya/wav2vec2-large-xlsr-breton | 1ea767a965ea1bc13ce655535b8115e3c5bf28bd | 2021-07-05T23:47:53.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-breton | 1 | null | transformers | 28,724 | ---
language: br
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Breton by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice br
type: common_voice
args: br
metrics:
- name: Test WER
type: wer
value: 41.71
---
# Wav2Vec2-Large-XLSR-Breton
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Breton Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
batch["sentence"] = batch["sentence"].replace("ʼ", "'")
batch["sentence"] = batch["sentence"].replace("’", "'")
batch["sentence"] = batch["sentence"].replace('‘', "'")
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
The above code leads to the following prediction for the first two samples:
```
Prediction: ["ne' ler ket don a-benn us netra pa vez zer nic'hed evel-si", 'an eil hag egile']
Reference: ['"n\'haller ket dont a-benn eus netra pa vezer nec\'het evel-se." ', 'an eil hag egile. ']
```
## Evaluation
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-breton")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
batch["sentence"] = batch["sentence"].replace("ʼ", "'")
batch["sentence"] = batch["sentence"].replace("’", "'")
batch["sentence"] = batch["sentence"].replace('‘', "'")
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.71 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
calebcsjm/distilgpt2-finetuned-wikitexts | 4189340d492ffe35b96b9cc592dff6d485e38579 | 2022-02-18T16:01:53.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | calebcsjm | null | calebcsjm/distilgpt2-finetuned-wikitexts | 1 | null | transformers | 28,725 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitexts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitexts
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
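For readers reproducing this setup, the values listed above roughly map onto `transformers.TrainingArguments` as in the hedged sketch below; the output directory name is an assumption, and the exact training script is not given in the card:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilgpt2-finetuned-wikitexts",  # assumed name, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Adam betas/epsilon below are the defaults the card reports.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```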
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cambridgeltl/mirrorwic-deberta-base | 710a900eb2b12a3dd8c7994d989065494cbb01ac | 2021-10-25T19:23:20.000Z | [
"pytorch",
"deberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/mirrorwic-deberta-base | 1 | null | transformers | 28,726 | Entry not found |
camille/bert-base-pruned-voc-esw0.1-40000-en-de-cased | b83ef53be88cdf256611b5c2683e6830831292cf | 2021-05-19T13:48:06.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.1-40000-en-de-cased | 1 | null | transformers | 28,727 | Entry not found |
camilodefelipe/t5_squad_v1_es | 78ce610c65e527887e61284db7d3beae6d231cf2 | 2021-11-21T15:57:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | camilodefelipe | null | camilodefelipe/t5_squad_v1_es | 1 | null | transformers | 28,728 | Entry not found |
cammy/bart-large-cnn-finetuned-weaksup-1000-earlystop | cdcc9a509738465f993e8a8383cf4e4c9ad616c8 | 2022-02-22T08:34:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-1000-earlystop | 1 | null | transformers | 28,729 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000-earlystop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-earlystop
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9095
- Rouge1: 27.9262
- Rouge2: 11.895
- Rougel: 21.4029
- Rougelsum: 24.7805
- Gen Len: 67.68
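The card gives no inference example; a minimal summarization sketch with the `transformers` pipeline (the input text is a placeholder to be replaced) could be:
```python
from transformers import pipeline

# Load the weakly supervised BART summarizer.
summarizer = pipeline("summarization", model="cammy/bart-large-cnn-finetuned-weaksup-1000-earlystop")

article = "..."  # replace with the document you want to summarize
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```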
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.502 | 1.0 | 1000 | 1.7405 | 26.5705 | 11.4807 | 20.1226 | 23.6827 | 66.73 |
| 0.7337 | 2.0 | 2000 | 1.9095 | 27.9262 | 11.895 | 21.4029 | 24.7805 | 67.68 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/bart-large-cnn-finetuned-weaksup-10000-pad-early | db439fe2795f1d0450b48595bc48b58b465b1dde | 2022-02-24T04:48:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-10000-pad-early | 1 | null | transformers | 28,730 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn-finetuned-weaksup-10000-pad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-10000-pad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3541
- eval_rouge1: 27.8229
- eval_rouge2: 12.9484
- eval_rougeL: 21.4909
- eval_rougeLsum: 24.7737
- eval_gen_len: 67.365
- eval_runtime: 1162.9446
- eval_samples_per_second: 0.86
- eval_steps_per_second: 0.86
- epoch: 2.0
- step: 20000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/distilbart-cnn-12-6-finetuned-weaksup-1000 | cf2fe58469739f4f2fd095ba1fb0bf25c1f67d5b | 2022-02-22T08:49:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/distilbart-cnn-12-6-finetuned-weaksup-1000 | 1 | null | transformers | 28,731 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-weaksup-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-weaksup-1000
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6818
- Rouge1: 25.9199
- Rouge2: 11.2697
- Rougel: 20.3598
- Rougelsum: 22.8242
- Gen Len: 66.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.644 | 1.0 | 1000 | 1.6818 | 25.9199 | 11.2697 | 20.3598 | 22.8242 | 66.44 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
candra/gpt2-newgen-test | bf1fe2b04d0bce091fcfb941d3787d74add4b0e5 | 2021-12-17T07:53:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | candra | null | candra/gpt2-newgen-test | 1 | null | transformers | 28,732 | news generator dummy |
caps1994/DialoGPT-small-chrisbot-caps1994 | 25bf87bcaf2d6e1348f6d7a984b3599d7a5770f5 | 2021-09-08T23:37:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | caps1994 | null | caps1994/DialoGPT-small-chrisbot-caps1994 | 1 | null | transformers | 28,733 | ---
tags:
- conversational
---
# Chris DialoGPT Model |
cariai/medslabs | 19d5b787272001202406b92d5519d59d13163c83 | 2021-05-20T15:16:39.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | cariai | null | cariai/medslabs | 1 | null | transformers | 28,734 | Med Labs Cariai
|
carlosejimenez/wiki103_bert_small_final_e27 | c7be2adc49f3cc689781957fb4aeacd877f1e434 | 2021-12-14T16:56:06.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/wiki103_bert_small_final_e27 | 1 | null | transformers | 28,735 | Entry not found |
carlosejimenez/wiki103_bert_small_k1000_e27 | 8d01113973a805c531ce73a0d2ef501b986a0d26 | 2021-12-14T16:58:29.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/wiki103_bert_small_k1000_e27 | 1 | null | transformers | 28,736 | Entry not found |
carlosejimenez/wiki103_bert_small_k10_e27 | 6cc29c9f4b50fc278938f6b4e644dc6999b13f8c | 2021-12-14T17:00:13.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/wiki103_bert_small_k10_e27 | 1 | null | transformers | 28,737 | Entry not found |
carlosejimenez/wiki103_bert_small_visual_only_e27 | e9cc5b0726559652a166edd434f11acc469bb184 | 2021-12-14T17:09:02.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/wiki103_bert_small_visual_only_e27 | 1 | null | transformers | 28,738 | Entry not found |
chaitanya97/custom_german | 0e3557b3978c5075930092096a491bc08c539e23 | 2021-10-25T16:27:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chaitanya97 | null | chaitanya97/custom_german | 1 | null | transformers | 28,739 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: custom_german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# custom_german
This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6832
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.7718 | 5.0 | 5 | 8.5148 | 1.0 |
| 3.7125 | 10.0 | 10 | 5.4304 | 1.0 |
| 2.7679 | 15.0 | 15 | 5.0388 | 1.0 |
| 2.0516 | 20.0 | 20 | 4.4628 | 1.0 |
| 1.6702 | 25.0 | 25 | 4.5341 | 1.0 |
| 1.515 | 30.0 | 30 | 4.6832 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
chaitanya97/german_pretrained | fe43d7289d4c2263fa14bb113f90425754c18cb9 | 2021-10-26T13:35:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chaitanya97 | null | chaitanya97/german_pretrained | 1 | null | transformers | 28,740 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: german_pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german_pretrained
This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9812
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 12.5229 | 5.0 | 5 | 12.9520 | 1.0 |
| 4.3782 | 10.0 | 10 | 5.5689 | 1.0 |
| 2.56 | 15.0 | 15 | 4.8410 | 1.0 |
| 2.2895 | 20.0 | 20 | 4.0380 | 1.0 |
| 1.872 | 25.0 | 25 | 3.9558 | 1.0 |
| 1.6992 | 30.0 | 30 | 3.9812 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
chaitanya97/wav2vec2-large-xls-r-300m-turkish-colab | cf08f75e505d1c05061639d49c09697bd8ce16a0 | 2022-02-16T10:38:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chaitanya97 | null | chaitanya97/wav2vec2-large-xls-r-300m-turkish-colab | 1 | null | transformers | 28,741 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 33.1265
- Wer: 1.0
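Note that the reported WER of 1.0 means this checkpoint is unlikely to produce useful transcriptions; purely as an illustration, inference with the `transformers` ASR pipeline (the audio path below is a placeholder) would look like:
```python
from transformers import pipeline

# Load the fine-tuned XLS-R checkpoint for Turkish speech recognition.
asr = pipeline("automatic-speech-recognition",
               model="chaitanya97/wav2vec2-large-xls-r-300m-turkish-colab")

# Placeholder path; the pipeline decodes and resamples the file to 16 kHz.
print(asr("sample_turkish.wav")["text"])
```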
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 21.4247 | 4.0 | 4 | 33.1265 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-LR1 | be7a317c73283bbbc8b1d724fd74d4675af6455a | 2021-12-04T11:37:31.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-batch8-LR1 | 1 | null | transformers | 28,742 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-LR1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-LR1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6826 | 27.5191 | 15.0672 | 23.3065 | 24.7163 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-LR2E6 | 08a19dcd56e1c13f398689db43226717765b8304 | 2021-12-04T12:07:12.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-batch8-LR2E6 | 1 | null | transformers | 28,743 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-LR2E6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-LR2E6
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.7971 | 26.6141 | 13.9957 | 22.3012 | 23.7509 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-LR4 | 68798f12cdbb12a37a474bb42769e08120c7bfb2 | 2021-12-04T11:53:34.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-batch8-LR4 | 1 | null | transformers | 28,744 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-LR4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-LR4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6037 | 28.1247 | 15.9399 | 23.8676 | 25.3739 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-epochs10 | 56c938fc69a06c150ec26516c5412a758f967bf9 | 2021-12-02T12:42:51.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-batch8-epochs10 | 1 | null | transformers | 28,745 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-epochs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-epochs10
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5763
- Rouge1: 28.693
- Rouge2: 16.666
- Rougel: 24.2361
- Rougelsum: 26.0289
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6043 | 27.8611 | 15.8713 | 23.8365 | 25.378 | 20.0 |
| 1.9054 | 2.0 | 990 | 1.5613 | 28.2715 | 16.3724 | 24.3212 | 25.8499 | 20.0 |
| 1.651 | 3.0 | 1485 | 1.5394 | 28.6282 | 16.2976 | 24.2336 | 25.9434 | 20.0 |
| 1.4955 | 4.0 | 1980 | 1.5438 | 28.9266 | 16.7257 | 24.61 | 26.443 | 20.0 |
| 1.4034 | 5.0 | 2475 | 1.5449 | 28.2296 | 16.1292 | 23.9698 | 25.651 | 20.0 |
| 1.3077 | 6.0 | 2970 | 1.5642 | 28.4486 | 16.3833 | 24.1629 | 26.0013 | 20.0 |
| 1.2505 | 7.0 | 3465 | 1.5566 | 28.5469 | 16.5374 | 24.2966 | 25.962 | 20.0 |
| 1.2027 | 8.0 | 3960 | 1.5730 | 28.7278 | 16.6442 | 24.2531 | 26.1171 | 20.0 |
| 1.1571 | 9.0 | 4455 | 1.5690 | 28.7736 | 16.7491 | 24.3066 | 26.1439 | 20.0 |
| 1.1237 | 10.0 | 4950 | 1.5763 | 28.693 | 16.666 | 24.2361 | 26.0289 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-epochs3 | d371633a46dfbf0551e6b82823e2b75132b1e075 | 2021-12-02T15:10:13.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-batch8-epochs3 | 1 | null | transformers | 28,746 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-epochs3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5635
- Rouge1: 28.2335
- Rouge2: 16.0201
- Rougel: 24.0315
- Rougelsum: 25.647
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
| 1.5345 | 2.0 | 990 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
| 1.531 | 3.0 | 1485 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-fact-corrector-II | 6aaaa650ef1a0ad7a65dd967e20939dbe2d6fb23 | 2021-12-05T20:22:09.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-fact-corrector-II | 1 | null | transformers | 28,747 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-fact-corrector-II
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-fact-corrector-II
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 305 | 1.5749 | 27.9313 | 15.1004 | 23.3282 | 25.2336 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews | f21b3e66b163732739037e4823532ff412ae0e42 | 2021-10-26T16:04:05.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews | 1 | null | transformers | 28,748 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6240
- Rouge1: 28.3618
- Rouge2: 15.9828
- Rougel: 24.078
- Rougelsum: 25.565
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.9433 | 1.0 | 989 | 1.6240 | 28.3618 | 15.9828 | 24.078 | 25.565 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-xsum | b0cecf8e08b63a1f7f139f5828c2cf105bdcd5f2 | 2021-08-23T20:21:52.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-xsum | 1 | null | transformers | 28,749 | ---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- rouge
model_index:
- name: bart-base-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Rouge1
type: rouge
value: 27.887
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-xsum
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5925
- Rouge1: 27.887
- Rouge2: 16.1414
- Rougel: 24.0525
- Rougelsum: 25.4029
- Gen Len: 19.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 1.9826 | 1.0 | 879 | 1.5925 | 27.887 | 16.1414 | 24.0525 | 25.4029 | 19.9841 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
charsiu/en_w2v2_fc_10ms_32k | 613d79f3cc7a7b96e8f82a1bda69b626a62371aa | 2021-10-03T14:24:55.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | charsiu | null | charsiu/en_w2v2_fc_10ms_32k | 1 | null | transformers | 28,750 | Entry not found |
charsiu/en_w2v2_fc_20ms | 41ae65b77e09407f8678700223b04d696c42e46f | 2021-10-03T14:29:02.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | charsiu | null | charsiu/en_w2v2_fc_20ms | 1 | 2 | transformers | 28,751 | Entry not found |
charsiu/en_w2v2_fs_20ms | dd618a5ce51ab44ad564c25b051b784f60e01d0a | 2021-10-04T15:25:15.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | charsiu | null | charsiu/en_w2v2_fs_20ms | 1 | null | transformers | 28,752 | Entry not found |
chatdemoiselle/MedMTEVAL_baseline | aea90e6dbf6e6eadd7221fe9cd24ffb7767808e0 | 2022-02-13T10:32:25.000Z | [
"pytorch"
] | null | false | chatdemoiselle | null | chatdemoiselle/MedMTEVAL_baseline | 1 | null | null | 28,753 | ---
language:
- ru
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: contest_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# contest_train
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4420
- Bleu: 67.6003
- Gen Len: 35.605
## Model description
More information needed
## Intended uses & limitations
More information needed
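No usage example is provided; a minimal Russian-to-English sketch, assuming the repository ships the Marian tokenizer files of its base model and using an invented input sentence, might look like:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical sketch; the input sentence below is invented for illustration.
model_id = "chatdemoiselle/MedMTEVAL_baseline"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Пациенту назначили курс антибиотиков на семь дней."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```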
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
chopey/testmntdv | 2d6831b1affa00dbe91e51b8f60be2740a042c19 | 2021-12-02T02:48:18.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | chopey | null | chopey/testmntdv | 1 | null | transformers | 28,754 | Test English-Dhivehi/Dhivehi-English NMT
It would need a lot more data to produce accurate translations. |
chujiezheng/DialoGPT-small-ESC | 58e5714ade24602c0624a031ab53a43b6e12eb67 | 2021-08-13T01:16:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:2106.01144",
"transformers"
] | text-generation | false | chujiezheng | null | chujiezheng/DialoGPT-small-ESC | 1 | null | transformers | 28,755 | [DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) fine-tuned on [Emotional Support Conversation](https://arxiv.org/pdf/2106.01144.pdf) dataset |
cjrowe/afriberta_base-finetuned-tydiqa | 288e8d1c0b352d786c0255cdedc49d9eceddbaea | 2021-12-17T18:21:22.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"sw",
"dataset:tydiqa",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | cjrowe | null | cjrowe/afriberta_base-finetuned-tydiqa | 1 | null | transformers | 28,756 | ---
language:
- sw
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: afriberta_base-finetuned-tydiqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_base-finetuned-tydiqa
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
## Model description
More information needed
## Intended uses & limitations
More information needed
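No usage notes are given; a minimal extractive question-answering sketch, assuming the checkpoint works with the standard `question-answering` pipeline and using an invented Swahili context/question pair, could be:
```python
from transformers import pipeline

# Hypothetical sketch: run this repository as an extractive QA pipeline.
qa = pipeline("question-answering", model="cjrowe/afriberta_base-finetuned-tydiqa")

# Invented Swahili example, not taken from the TyDi QA data.
context = "Mlima Kilimanjaro ni mlima mrefu zaidi barani Afrika na unapatikana nchini Tanzania."
question = "Mlima Kilimanjaro unapatikana wapi?"

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```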
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 192 | 2.1359 |
| No log | 2.0 | 384 | 2.3409 |
| 0.8353 | 3.0 | 576 | 2.3728 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cl-nagoya/defsent-bert-large-uncased-cls | 32cee7905696d87fc8f9f366efc40c26b75f3fe8 | 2021-08-05T05:46:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-bert-large-uncased-cls | 1 | null | transformers | 28,757 | Entry not found |
cl-nagoya/defsent-bert-large-uncased-mean | 39b2785b292c9812d3eb29ba2d97572e5baf4784 | 2021-08-05T05:47:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-bert-large-uncased-mean | 1 | null | transformers | 28,758 | Entry not found |
cl-nagoya/defsent-roberta-large-cls | 6497379f77dff6681ee76feaf71ec395599f57f9 | 2021-08-05T05:48:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-roberta-large-cls | 1 | null | transformers | 28,759 | Entry not found |
classla/bcms-bertic-geo | d8806c3b7593f9311d5ae3210832bd315fd418f2 | 2021-02-20T06:46:06.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | classla | null | classla/bcms-bertic-geo | 1 | null | transformers | 28,760 | Entry not found |
classla/bert-base-german-dbmdz-uncased-geo | 2f627ee4d7390fa9a51f54ff7887ffe9bd9c312b | 2021-05-19T14:23:58.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | classla | null | classla/bert-base-german-dbmdz-uncased-geo | 1 | null | transformers | 28,761 | Entry not found |
classla/swissbert-geo | eeaefd2099ad978b298c3f429cac203245b0d796 | 2021-05-19T14:24:21.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | classla | null | classla/swissbert-geo | 1 | null | transformers | 28,762 | Entry not found |
clayfox/DialoGPT-small-Hiccup | 00286f311a0dc106c16762af3c759df11d43def4 | 2021-11-28T16:23:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | clayfox | null | clayfox/DialoGPT-small-Hiccup | 1 | null | transformers | 28,763 | ---
tags:
- conversational
---
# HiccupBot DialoGPT Model |
cling371/modeling_test | 59db277d7597dc90f41601ea6dfde2050042ecae | 2021-06-11T07:43:33.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cling371 | null | cling371/modeling_test | 1 | null | transformers | 28,764 | Entry not found |
coldfir3/xlm-roberta-base-finetuned-panx-fr | dbffa33fbe475a76e687b171fd723b424b9608f2 | 2022-01-02T18:49:32.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | coldfir3 | null | coldfir3/xlm-roberta-base-finetuned-panx-fr | 1 | null | transformers | 28,765 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8354854938789199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2651
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
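Usage is not documented; a minimal French NER sketch, assuming the checkpoint exposes a standard token-classification head and using an invented sentence, could be:
```python
from transformers import pipeline

# Hypothetical sketch: run the fine-tuned checkpoint as a token-classification (NER) pipeline.
ner = pipeline(
    "token-classification",
    model="coldfir3/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Invented French sentence, not taken from the PAN-X data.
print(ner("Marie Curie a travaillé à Paris pour l'Université de la Sorbonne."))
```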
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 |
| 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 |
| 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
colochoplay/DialoGTP-small-harrypotter | 97ec6668d35574eea11e4539d409de7b69f1df91 | 2021-09-06T03:31:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | colochoplay | null | colochoplay/DialoGTP-small-harrypotter | 1 | null | transformers | 28,766 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
comacrae/roberta-eda-and-parav3 | 4630a7a5d5cf9b9950813a479a7975f82963f9ef | 2022-02-22T23:41:46.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | comacrae | null | comacrae/roberta-eda-and-parav3 | 1 | null | transformers | 28,767 | Entry not found |
comacrae/roberta-edav3 | c7b62484564b8b34ab5eea76c0639df64fbd03ab | 2022-02-22T22:30:37.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | comacrae | null | comacrae/roberta-edav3 | 1 | null | transformers | 28,768 | Entry not found |
comacrae/roberta-unaugv3 | 61eb14d748799a6e1a37ef742e46804ee5580a7e | 2022-02-22T21:22:34.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | comacrae | null | comacrae/roberta-unaugv3 | 1 | null | transformers | 28,769 | Entry not found |
comodoro/wav2vec2-xls-r-300m-hsb-cv8 | 9f23e8f22cc0b5f55ea6ba28bbf2a76c33639659 | 2022-03-24T11:53:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hsb",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-hsb-cv8 | 1 | null | transformers | 28,770 | ---
language:
- hsb
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- xlsr-fine-tuning-week
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: Upper Sorbian comodoro Wav2Vec2 XLSR 300M CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hsb
metrics:
- name: Test WER
type: wer
value: 56.3
- name: Test CER
type: cer
value: 14.3
---
# Upper Sorbian wav2vec2-xls-r-300m-hsb-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Wer: 0.5037
- Cer: 0.1278
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-hsb-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config hsb
```
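For single-file inference, which the card does not cover, a rough sketch along the lines of the other XLS-R cards (assuming 16 kHz audio and an invented file path) would be:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical sketch: transcribe one Upper Sorbian recording with this checkpoint.
model_id = "comodoro/wav2vec2-xls-r-300m-hsb-cv8"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "recording.wav" is a placeholder path; the model expects 16 kHz input.
speech, sampling_rate = torchaudio.load("recording.wav")
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```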
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 4.3121 | 19.35 | 1200 | 3.2059 | 1.0 | 1.0 |
| 2.6525 | 38.71 | 2400 | 1.1324 | 0.9387 | 0.3204 |
| 1.3644 | 58.06 | 3600 | 0.8767 | 0.8099 | 0.2271 |
| 1.093 | 77.42 | 4800 | 0.8739 | 0.7603 | 0.2090 |
| 0.9546 | 96.77 | 6000 | 0.8454 | 0.6983 | 0.1882 |
| 0.8554 | 116.13 | 7200 | 0.8197 | 0.6484 | 0.1708 |
| 0.775 | 135.48 | 8400 | 0.8452 | 0.6345 | 0.1681 |
| 0.7167 | 154.84 | 9600 | 0.8551 | 0.6241 | 0.1631 |
| 0.6609 | 174.19 | 10800 | 0.8442 | 0.5821 | 0.1531 |
| 0.616 | 193.55 | 12000 | 0.8892 | 0.5864 | 0.1527 |
| 0.5815 | 212.9 | 13200 | 0.8839 | 0.5772 | 0.1503 |
| 0.55 | 232.26 | 14400 | 0.8905 | 0.5665 | 0.1436 |
| 0.5173 | 251.61 | 15600 | 0.8995 | 0.5471 | 0.1417 |
| 0.4969 | 270.97 | 16800 | 0.8633 | 0.5325 | 0.1334 |
| 0.4803 | 290.32 | 18000 | 0.9074 | 0.5253 | 0.1352 |
| 0.4596 | 309.68 | 19200 | 0.9159 | 0.5146 | 0.1294 |
| 0.4415 | 329.03 | 20400 | 0.9055 | 0.5189 | 0.1314 |
| 0.434 | 348.39 | 21600 | 0.9435 | 0.5208 | 0.1314 |
| 0.4199 | 367.74 | 22800 | 0.9199 | 0.5136 | 0.1290 |
| 0.4008 | 387.1 | 24000 | 0.9342 | 0.5174 | 0.1303 |
| 0.4051 | 406.45 | 25200 | 0.9436 | 0.5132 | 0.1292 |
| 0.3861 | 425.81 | 26400 | 0.9417 | 0.5084 | 0.1283 |
| 0.3738 | 445.16 | 27600 | 0.9573 | 0.5079 | 0.1299 |
| 0.3768 | 464.52 | 28800 | 0.9682 | 0.5062 | 0.1289 |
| 0.3647 | 483.87 | 30000 | 0.9643 | 0.5037 | 0.1278 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
comodoro/wav2vec2-xls-r-300m-pl-cv8 | eedf150688ee1985ca9b716f353e3a575098d25f | 2022-03-24T11:52:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pl",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-pl-cv8 | 1 | null | transformers | 28,771 | ---
language:
- pl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- xlsr-fine-tuning-week
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: Polish comodoro Wav2Vec2 XLSR 300M CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pl
metrics:
- name: Test WER
type: wer
value: 17.0
- name: Test CER
type: cer
value: 3.8
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pl
metrics:
- name: Test WER
type: wer
value: 38.97
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pl
metrics:
- name: Test WER
type: wer
value: 46.05
---
# wav2vec2-xls-r-300m-pl-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset.
It achieves the following results on the evaluation set while training:
- Loss: 0.1716
- Wer: 0.1697
- Cer: 0.0385
The `eval.py` script results are:
- WER: 0.16970531733661967
- CER: 0.03839135416519316
## Model description
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Polish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "pl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-pl-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-pl-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-pl-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config pl
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training
## Training procedure
### Training hyperparameters
The following hyperparameters were used:
- learning_rate: 1e-4
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
The training was interrupted after 3250 steps.
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
comodoro/wav2vec2-xls-r-300m-sr-cv8 | 75ce1f7e6f27eec4f668398eaf534969b1577977 | 2022-03-24T11:53:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sr",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-sr-cv8 | 1 | null | transformers | 28,772 | ---
language:
- sr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- xlsr-fine-tuning-week
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Serbian comodoro Wav2Vec2 XLSR 300M CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sr
metrics:
- name: Test WER
type: wer
value: 48.5
- name: Test CER
type: cer
value: 18.4
- name: wav2vec2-xls-r-300m-sr-cv8
results:
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: sr
metrics:
- name: Test WER
type: wer
value: 48.53
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sr
metrics:
- name: Test WER
type: wer
value: 97.43
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sr
metrics:
- name: Test WER
type: wer
value: 96.69
---
# Serbian wav2vec2-xls-r-300m-sr-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7302
- Wer: 0.4825
- Cer: 0.1847
Evaluation on mozilla-foundation/common_voice_8_0 gave the following results:
- WER: 0.48530097993467103
- CER: 0.18413288165227845
Evaluation on speech-recognition-community-v2/dev_data gave the following results:
- WER: 0.9718373107518604
- CER: 0.8302740620263108
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sr-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config sr
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 5.6536 | 15.0 | 1200 | 2.9744 | 1.0 | 1.0 |
| 2.7935 | 30.0 | 2400 | 1.6613 | 0.8998 | 0.4670 |
| 1.6538 | 45.0 | 3600 | 0.9248 | 0.6918 | 0.2699 |
| 1.2446 | 60.0 | 4800 | 0.9151 | 0.6452 | 0.2398 |
| 1.0766 | 75.0 | 6000 | 0.9110 | 0.5995 | 0.2207 |
| 0.9548 | 90.0 | 7200 | 1.0273 | 0.5921 | 0.2149 |
| 0.8919 | 105.0 | 8400 | 0.9929 | 0.5646 | 0.2117 |
| 0.8185 | 120.0 | 9600 | 1.0850 | 0.5483 | 0.2069 |
| 0.7692 | 135.0 | 10800 | 1.1001 | 0.5394 | 0.2055 |
| 0.7249 | 150.0 | 12000 | 1.1018 | 0.5380 | 0.1958 |
| 0.6786 | 165.0 | 13200 | 1.1344 | 0.5114 | 0.1941 |
| 0.6432 | 180.0 | 14400 | 1.1516 | 0.5054 | 0.1905 |
| 0.6009 | 195.0 | 15600 | 1.3149 | 0.5324 | 0.1991 |
| 0.5773 | 210.0 | 16800 | 1.2468 | 0.5124 | 0.1903 |
| 0.559 | 225.0 | 18000 | 1.2186 | 0.4956 | 0.1922 |
| 0.5298 | 240.0 | 19200 | 1.4483 | 0.5333 | 0.2085 |
| 0.5136 | 255.0 | 20400 | 1.2871 | 0.4802 | 0.1846 |
| 0.4824 | 270.0 | 21600 | 1.2891 | 0.4974 | 0.1885 |
| 0.4669 | 285.0 | 22800 | 1.3283 | 0.4942 | 0.1878 |
| 0.4511 | 300.0 | 24000 | 1.4502 | 0.5002 | 0.1994 |
| 0.4337 | 315.0 | 25200 | 1.4714 | 0.5035 | 0.1911 |
| 0.4221 | 330.0 | 26400 | 1.4971 | 0.5124 | 0.1962 |
| 0.3994 | 345.0 | 27600 | 1.4473 | 0.5007 | 0.1920 |
| 0.3892 | 360.0 | 28800 | 1.3904 | 0.4937 | 0.1887 |
| 0.373 | 375.0 | 30000 | 1.4971 | 0.4946 | 0.1902 |
| 0.3657 | 390.0 | 31200 | 1.4208 | 0.4900 | 0.1821 |
| 0.3559 | 405.0 | 32400 | 1.4648 | 0.4895 | 0.1835 |
| 0.3476 | 420.0 | 33600 | 1.4848 | 0.4946 | 0.1829 |
| 0.3276 | 435.0 | 34800 | 1.5597 | 0.4979 | 0.1873 |
| 0.3193 | 450.0 | 36000 | 1.7329 | 0.5040 | 0.1980 |
| 0.3078 | 465.0 | 37200 | 1.6379 | 0.4937 | 0.1882 |
| 0.3058 | 480.0 | 38400 | 1.5878 | 0.4942 | 0.1921 |
| 0.2987 | 495.0 | 39600 | 1.5590 | 0.4811 | 0.1846 |
| 0.2931 | 510.0 | 40800 | 1.6001 | 0.4825 | 0.1849 |
| 0.276 | 525.0 | 42000 | 1.7388 | 0.4942 | 0.1918 |
| 0.2702 | 540.0 | 43200 | 1.7037 | 0.4839 | 0.1866 |
| 0.2619 | 555.0 | 44400 | 1.6704 | 0.4755 | 0.1840 |
| 0.262 | 570.0 | 45600 | 1.6042 | 0.4751 | 0.1865 |
| 0.2528 | 585.0 | 46800 | 1.6402 | 0.4821 | 0.1865 |
| 0.2442 | 600.0 | 48000 | 1.6693 | 0.4886 | 0.1862 |
| 0.244 | 615.0 | 49200 | 1.6203 | 0.4765 | 0.1792 |
| 0.2388 | 630.0 | 50400 | 1.6829 | 0.4830 | 0.1828 |
| 0.2362 | 645.0 | 51600 | 1.8100 | 0.4928 | 0.1888 |
| 0.2224 | 660.0 | 52800 | 1.7746 | 0.4932 | 0.1899 |
| 0.2218 | 675.0 | 54000 | 1.7752 | 0.4946 | 0.1901 |
| 0.2201 | 690.0 | 55200 | 1.6775 | 0.4788 | 0.1844 |
| 0.2147 | 705.0 | 56400 | 1.7085 | 0.4844 | 0.1851 |
| 0.2103 | 720.0 | 57600 | 1.7624 | 0.4848 | 0.1864 |
| 0.2101 | 735.0 | 58800 | 1.7213 | 0.4783 | 0.1835 |
| 0.1983 | 750.0 | 60000 | 1.7452 | 0.4848 | 0.1856 |
| 0.2015 | 765.0 | 61200 | 1.7525 | 0.4872 | 0.1869 |
| 0.1969 | 780.0 | 62400 | 1.7443 | 0.4844 | 0.1852 |
| 0.2043 | 795.0 | 63600 | 1.7302 | 0.4825 | 0.1847 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cosmic/DialoGPT-Rick | 699671b7d04a160441b2235d0f68d3b0255b55a3 | 2021-10-11T17:29:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cosmic | null | cosmic/DialoGPT-Rick | 1 | null | transformers | 28,773 | ---
tags:
- conversational
---
# Rick Sanchez |
cowTodd/adalm-cs-small | 4606ff7e5e0970b8cf54b83d6dbb2fb8013b9efb | 2021-09-18T06:16:38.000Z | [
"pytorch",
"transformers"
] | null | false | cowTodd | null | cowTodd/adalm-cs-small | 1 | null | transformers | 28,774 | Entry not found |
cpierse/wav2vec2-large-xlsr-53-esperanto | 3ff4b48db24592341ea7cc8930da9a1b172b4930 | 2021-07-06T00:44:08.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"eo",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cpierse | null | cpierse/wav2vec2-large-xlsr-53-esperanto | 1 | 1 | transformers | 28,775 | ---
language: eo
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Esperanto by Charles Pierse
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eo
type: common_voice
args: eo
metrics:
- name: Test WER
type: wer
value: 12.31
---
# Wav2Vec2-Large-XLSR-53-eo
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eo", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Esperanto test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
test_dataset = load_dataset("common_voice", "eo", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"],chunk_size=2000)))
```
**Test Result**: 12.31 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
|
cpierse/wav2vec2-large-xlsr-53-irish | 6b48d1416d6f22334b1505773c1c5da54c7d5a25 | 2021-07-06T00:48:34.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cpierse | null | cpierse/wav2vec2-large-xlsr-53-irish | 1 | null | transformers | 28,776 | ---
language: ga-IE
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: cpierse/wav2vec2-large-xlsr-53-irish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ga-IE
type: common_voice
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 43.06
---
# Wav2Vec2-Large-XLSR-53-Irish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Irish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Irish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.06 %
|
crabz/bertoslav-limited | 06ee6edcd493ad5847335ece65359f554d08a6df | 2022-03-06T12:29:08.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | crabz | null | crabz/bertoslav-limited | 1 | 1 | transformers | 28,777 | ---
inference: false
--- |
creat89/NER_FEDA_Sl | fd8348f5a1581630840b3682c732615d44b9bd61 | 2022-04-13T09:32:36.000Z | [
"pytorch",
"bert",
"hr",
"sl",
"en",
"transformers",
"CroSloEngual",
"ner",
"license:mit"
] | null | false | creat89 | null | creat89/NER_FEDA_Sl | 1 | null | transformers | 28,778 | ---
license: mit
language:
- hr
- sl
- en
tags:
- CroSloEngual
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on CroSloEngual (https://huggingface.co/EMBEDDIA/crosloengual-bert) and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SSJ500k (LOC, MISC, ORG, PER)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date
You can select which tagset to use in the output by configuring the model. The model also includes special handling for uppercase words.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). |
creynier/wav2vec2-base-swbd-small-turn-eos-2 | 1eda59d746dc2081196e4e91ebaafc74f11bcf4a | 2022-01-29T10:39:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-small-turn-eos-2 | 1 | null | transformers | 28,779 | Entry not found |
creynier/wav2vec2-base-swbd-turn-small-2 | 16a8479e8fe4fdf5d2fa4c32a3450f095d505520 | 2022-02-14T16:00:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-small-2 | 1 | null | transformers | 28,780 | Entry not found |
cristinakuo/wav2vec-timit | f9db7d8d252dc0ca266464159e12b8350eeaf464 | 2021-12-12T22:48:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cristinakuo | null | cristinakuo/wav2vec-timit | 1 | null | transformers | 28,781 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-timit
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
crystalgate/DialoGPT-small-rick | 2619f73cbede7949db461741b6315a05a34e9db9 | 2022-01-05T17:17:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | crystalgate | null | crystalgate/DialoGPT-small-rick | 1 | null | transformers | 28,782 | ---
tags:
- conversational
---
# Rick DialoGPT model |
csbongga/Machi-QAG-01 | 9f8feadc873775d94af9528f62119b85d4593115 | 2022-02-23T02:43:55.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | csbongga | null | csbongga/Machi-QAG-01 | 1 | null | transformers | 28,783 | Entry not found |
csbongga/Machi-QAG-02 | b7e8ec13ee9329757cf6cc4b876839e8e148ecb4 | 2022-02-23T03:18:57.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | csbongga | null | csbongga/Machi-QAG-02 | 1 | null | transformers | 28,784 | Entry not found |
csikasote/wav2vec2-large-xls-r-300m-bemba-fds | 8868faafde9bf5e2b01ba3cda1594b4976e5cd0c | 2022-02-10T07:21:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"bem",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/wav2vec2-large-xls-r-300m-bemba-fds | 1 | null | transformers | 28,785 | ---
license: apache-2.0
tags:
- generated_from_trainer
- bem
- robust-speech-event
model-index:
- name: wav2vec2-large-xls-r-300m-bemba-fds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bemba-fds
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3594
- Wer: 0.3838
## Model description
More information needed
## Intended uses & limitations
More information needed
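As a rough illustration (not part of the original card), transcription with the high-level `automatic-speech-recognition` pipeline, assuming a 16 kHz recording at a placeholder path, might look like:
```python
from transformers import pipeline

# Hypothetical sketch: "bemba_sample.wav" is a placeholder for a 16 kHz Bemba recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/wav2vec2-large-xls-r-300m-bemba-fds",
)
print(asr("bemba_sample.wav")["text"])
```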
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9961 | 0.67 | 500 | 0.5157 | 0.7133 |
| 0.5903 | 1.34 | 1000 | 0.3663 | 0.4989 |
| 0.4804 | 2.02 | 1500 | 0.3547 | 0.4653 |
| 0.4146 | 2.69 | 2000 | 0.3274 | 0.4345 |
| 0.3792 | 3.36 | 2500 | 0.3586 | 0.4640 |
| 0.3509 | 4.03 | 3000 | 0.3360 | 0.4316 |
| 0.3114 | 4.7 | 3500 | 0.3382 | 0.4303 |
| 0.2935 | 5.38 | 4000 | 0.3263 | 0.4091 |
| 0.2723 | 6.05 | 4500 | 0.3348 | 0.4175 |
| 0.2502 | 6.72 | 5000 | 0.3317 | 0.4147 |
| 0.2334 | 7.39 | 5500 | 0.3542 | 0.4030 |
| 0.2287 | 8.06 | 6000 | 0.3594 | 0.4067 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
cumtowndiscord/DialoGPT-small-joshua | 8bd1906d558da1fd911fa8f1e047a40565062184 | 2022-02-04T16:25:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cumtowndiscord | null | cumtowndiscord/DialoGPT-small-joshua | 1 | null | transformers | 28,786 | ---
tags:
- conversational
---
# My Awesome Model
|
cuongtran/RobertaTextSummarization | 194706b58c1f6c48b34e435debc43b4eac07ea79 | 2021-09-29T14:52:45.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cuongtran | null | cuongtran/RobertaTextSummarization | 1 | null | transformers | 28,787 | Entry not found |
d4rk/harry | d77d1fb5b78ce4f43c620bb29161af8019f7b7d0 | 2021-12-02T11:04:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | d4rk | null | d4rk/harry | 1 | null | transformers | 28,788 | ---
tags:
- conversational
---
# Harry |
danhsf/t5-small-finetuned-en-to-pt | c9a6dab85f1152cf20da4c99b12bebe0ef5617b5 | 2022-01-23T00:38:04.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | danhsf | null | danhsf/t5-small-finetuned-en-to-pt | 1 | null | transformers | 28,789 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-pt
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3295
- Bleu: 5.6807
- Gen Len: 18.6772
## Model description
More information needed
## Intended uses & limitations
More information needed
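Usage is not documented; since the base model is T5, a minimal sketch could look like the following. The "translate English to Portuguese:" task prefix is an assumption (the card does not say whether a prefix was used during fine-tuning), and the input sentence is invented:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "danhsf/t5-small-finetuned-en-to-pt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The task prefix below mirrors common T5 translation setups; it is assumed, not confirmed by the card.
text = "translate English to Portuguese: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```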
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.5787 | 1.0 | 6250 | 0.4928 | 4.1007 | 18.638 |
| 0.5089 | 2.0 | 12500 | 0.4463 | 4.3492 | 18.663 |
| 0.4652 | 3.0 | 18750 | 0.4215 | 4.68 | 18.6652 |
| 0.4353 | 4.0 | 25000 | 0.3980 | 4.8172 | 18.6708 |
| 0.4042 | 5.0 | 31250 | 0.3799 | 4.9719 | 18.6514 |
| 0.3734 | 6.0 | 37500 | 0.3676 | 5.2226 | 18.6572 |
| 0.3396 | 7.0 | 43750 | 0.3513 | 5.2693 | 18.6596 |
| 0.308 | 8.0 | 50000 | 0.3400 | 5.4546 | 18.676 |
| 0.2767 | 9.0 | 56250 | 0.3331 | 5.5649 | 18.6708 |
| 0.2424 | 10.0 | 62500 | 0.3295 | 5.6807 | 18.6772 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
danhsf/t5-small-finetuned-en-to-ro-lr_2e-3-fp_false | 776494c2e66dd823bcfe70520d8fc68b21563b96 | 2021-12-03T09:19:34.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | danhsf | null | danhsf/t5-small-finetuned-en-to-ro-lr_2e-3-fp_false | 1 | null | transformers | 28,790 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.1921
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4239
- Bleu: 7.1921
- Gen Len: 18.2611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.8922 | 0.05 | 2000 | 1.7000 | 6.5274 | 18.2656 |
| 0.8621 | 0.1 | 4000 | 1.6409 | 6.6411 | 18.2311 |
| 0.8433 | 0.16 | 6000 | 1.6396 | 6.6601 | 18.2596 |
| 0.8297 | 0.21 | 8000 | 1.6304 | 6.7129 | 18.2581 |
| 0.8006 | 0.26 | 10000 | 1.6022 | 6.6067 | 18.2816 |
| 0.793 | 0.31 | 12000 | 1.5999 | 6.551 | 18.2631 |
| 0.774 | 0.37 | 14000 | 1.5586 | 6.7105 | 18.2661 |
| 0.7618 | 0.42 | 16000 | 1.5769 | 6.7278 | 18.2526 |
| 0.7463 | 0.47 | 18000 | 1.5625 | 6.6972 | 18.2201 |
| 0.7394 | 0.52 | 20000 | 1.5377 | 6.936 | 18.2491 |
| 0.7203 | 0.58 | 22000 | 1.5191 | 7.0205 | 18.2731 |
| 0.7158 | 0.63 | 24000 | 1.5055 | 6.835 | 18.2506 |
| 0.688 | 0.68 | 26000 | 1.4779 | 7.0534 | 18.2716 |
| 0.678 | 0.73 | 28000 | 1.4691 | 6.9735 | 18.2616 |
| 0.6677 | 0.79 | 30000 | 1.4702 | 7.0359 | 18.2496 |
| 0.6568 | 0.84 | 32000 | 1.4534 | 6.9982 | 18.2556 |
| 0.6475 | 0.89 | 34000 | 1.4427 | 7.0443 | 18.2466 |
| 0.6395 | 0.94 | 36000 | 1.4265 | 7.1205 | 18.2721 |
| 0.6319 | 1.0 | 38000 | 1.4239 | 7.1921 | 18.2611 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
danielbispov/t5-small-finetuned-fi-to-en | a2da35e7c09e66f02fa57703f07f273a50449b38 | 2021-12-05T16:40:52.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt19",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | danielbispov | null | danielbispov/t5-small-finetuned-fi-to-en | 1 | null | transformers | 28,791 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt19
metrics:
- bleu
model-index:
- name: t5-small-finetuned-fi-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt19
type: wmt19
args: fi-en
metrics:
- name: Bleu
type: bleu
value: 1.129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-fi-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5235
- Bleu: 1.129
- Gen Len: 17.088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:|
| 3.414 | 1.0 | 6250 | 3.5235 | 1.129 | 17.088 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
danielbubiola/daniel_asr | 935e03d571638e5778c6c3123fc885f83789d0fc | 2022-01-24T05:30:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | danielbubiola | null | danielbubiola/daniel_asr | 1 | null | transformers | 28,792 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: daniel_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel_asr
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4565
- Wer: 0.3423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4909 | 4.0 | 500 | 1.3485 | 0.8887 |
| 0.5887 | 8.0 | 1000 | 0.4957 | 0.4641 |
| 0.2207 | 12.0 | 1500 | 0.4621 | 0.3971 |
| 0.125 | 16.0 | 2000 | 0.4339 | 0.3756 |
| 0.0829 | 20.0 | 2500 | 0.4618 | 0.3613 |
| 0.0601 | 24.0 | 3000 | 0.4564 | 0.3535 |
| 0.0456 | 28.0 | 3500 | 0.4565 | 0.3423 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
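A minimal transcription sketch follows. It assumes the repository contains both the fine-tuned weights and the matching processor (tokenizer plus feature extractor) files, and it uses a small public dataset purely as example input; adapt the audio loading to your own 16 kHz mono recordings.
```python
# Hypothetical usage sketch: assumes processor files were pushed alongside the model.
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "danielbubiola/daniel_asr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Any 16 kHz mono waveform works; a dummy LibriSpeech clip is used here for illustration.
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```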
|
danny481/DialoGPT-small-harrypotter | 0be0dd1b34dd9722f7a68367df0997f17a5aebd8 | 2021-12-27T23:56:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | danny481 | null | danny481/DialoGPT-small-harrypotter | 1 | null | transformers | 28,793 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model
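Below is a minimal single-turn chat sketch following the standard DialoGPT usage pattern; the generation settings are illustrative assumptions, not values documented for this checkpoint.
```python
# Hypothetical usage sketch: standard DialoGPT-style decoding with illustrative settings.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "danny481/DialoGPT-small-harrypotter"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT expects conversation turns to be separated by the EOS token.
prompt = "Hello, who are you?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```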
|
danny911kr/calm-base | 818034e7e496b340193aa72de06a947b48a08d68 | 2021-09-16T07:16:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | danny911kr | null | danny911kr/calm-base | 1 | null | transformers | 28,794 | ## CALM
This model accompanies the ICLR 2021 paper [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
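For a quick start, the sketch below loads the checkpoint through the standard T5 text-to-text interface; the tokenizer choice and the span-infilling input are assumptions made for illustration, not details taken from the paper.
```python
# Hypothetical usage sketch: assumes t5-base tokenizer compatibility; the input is illustrative.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")  # assumed-compatible tokenizer
model = T5ForConditionalGeneration.from_pretrained("danny911kr/calm-base")

# T5-style span infilling: the model is asked to fill the sentinel slots.
inputs = tokenizer("A chef <extra_id_0> a meal in the <extra_id_1>.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```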
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
``` |
danny911kr/calm-large | a2d2d6d86565cecdc6c09871f90547ff0b3c84d3 | 2021-09-16T07:16:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | danny911kr | null | danny911kr/calm-large | 1 | null | transformers | 28,795 | ## CALM
This model accompanies the ICLR 2021 paper [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
``` |
danny911kr/calm-mix-base | 040c857b1e0283d52b7c1654f7fede0c1908daf7 | 2021-09-16T07:20:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | danny911kr | null | danny911kr/calm-mix-base | 1 | null | transformers | 28,796 | ## CALM
This model accompanies the ICLR 2021 paper [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
``` |
danurahul/Eddie_neo_1.3train | b1fd303fc4ea549ea5749fa185b38ca29ca513f9 | 2021-06-17T14:06:29.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/Eddie_neo_1.3train | 1 | null | transformers | 28,797 | Entry not found |
danurahul/Eddie_neo_j11 | a559f6df662574e294ab1db02b5753cbcef9c3a1 | 2021-06-17T06:30:42.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/Eddie_neo_j11 | 1 | null | transformers | 28,798 | Entry not found |
danurahul/alex-gpt-finetune | cb6650480607ece4d35e57b52d0b52f77a07c09d | 2021-05-21T15:16:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/alex-gpt-finetune | 1 | null | transformers | 28,799 | Entry not found |