| modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
anton-l/wav2vec2-large-xlsr-53-slovenian | 9801c0c603f87bd0f2ad0504ba16966f2984571e | 2021-07-05T20:36:02.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sl",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-slovenian | 3 | 0 | transformers | 21,100 | ---
language: sl
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Slovenian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sl
type: common_voice
args: sl
metrics:
- name: Test WER
type: wer
value: 36.04
---
# Wav2Vec2-Large-XLSR-53-Slovenian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Slovenian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
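To transcribe your own recording rather than a Common Voice sample, you can reuse the `processor` and `model` loaded above — a minimal sketch (`my_audio.wav` is a placeholder path; audio at any rate is resampled to 16 kHz):
```python
import torch
import torchaudio

speech_array, sampling_rate = torchaudio.load("my_audio.wav")
# Resample whatever rate the file has to the model's expected 16 kHz
speech = torchaudio.functional.resample(speech_array, sampling_rate, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```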
## Evaluation
The model can be evaluated as follows on the Slovenian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/sl.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/sl/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/sl/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 36.04 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
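In `datasets`, the two splits can be concatenated directly with a split expression — a sketch (not the author's training script):
```python
from datasets import load_dataset

# "train+validation" concatenates the two splits into one training set
train_dataset = load_dataset("common_voice", "sl", split="train+validation")
```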
|
anuragshas/wav2vec2-large-xls-r-300m-ha-cv8 | 9ffa6b7518b7b96b14c22f6290d733cfd83f0bed | 2022-03-24T11:57:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-ha-cv8 | 3 | null | transformers | 21,101 | ---
language:
- ha
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Hausa
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: ha
metrics:
- type: wer
value: 36.295
name: Test WER
- name: Test CER
type: cer
value: 11.073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Hausa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6094
- Wer: 0.5234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
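For reference, a sketch of how these settings map onto `transformers.TrainingArguments` (illustrative only — the actual training script is not part of this card, and `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-ha-cv8",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=13,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,                       # mixed precision ("Native AMP")
)
```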
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9599 | 6.56 | 400 | 2.8650 | 1.0 |
| 2.7357 | 13.11 | 800 | 2.7377 | 0.9951 |
| 1.3012 | 19.67 | 1200 | 0.6686 | 0.7111 |
| 1.0454 | 26.23 | 1600 | 0.5686 | 0.6137 |
| 0.9069 | 32.79 | 2000 | 0.5576 | 0.5815 |
| 0.82 | 39.34 | 2400 | 0.5502 | 0.5591 |
| 0.7413 | 45.9 | 2800 | 0.5970 | 0.5586 |
| 0.6872 | 52.46 | 3200 | 0.5817 | 0.5428 |
| 0.634 | 59.02 | 3600 | 0.5636 | 0.5314 |
| 0.6022 | 65.57 | 4000 | 0.5780 | 0.5229 |
| 0.5705 | 72.13 | 4400 | 0.6036 | 0.5323 |
| 0.5408 | 78.69 | 4800 | 0.6119 | 0.5336 |
| 0.5225 | 85.25 | 5200 | 0.6105 | 0.5270 |
| 0.5265 | 91.8 | 5600 | 0.6034 | 0.5231 |
| 0.5154 | 98.36 | 6000 | 0.6094 | 0.5234 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ha-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-ha-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ha", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "kakin hade ya ke da kyautar"
```
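`batch_decode` is applied to raw logits and returns a `.text` field because `AutoProcessor` resolves to a `Wav2Vec2ProcessorWithLM` here (the repo ships an n-gram LM). A sketch of scoring the decoded text against the (unnormalized) reference from the same sample:
```python
from datasets import load_metric

# Compare the LM-decoded transcription with the reference sentence
wer_metric = load_metric("wer")
print(wer_metric.compute(predictions=[transcription[0]], references=[sample["sentence"]]))
```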
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 47.821 | 36.295 | |
anuragshas/wav2vec2-large-xls-r-300m-mr | 32bb148770cac0f6f7e94a3a192906a4fe33de8e | 2022-03-24T11:55:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-mr | 3 | 1 | transformers | 21,102 | ---
language:
- mr
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-mr
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: mr
metrics:
- type: wer
value: 32.811
name: Test WER
- name: Test CER
type: cer
value: 7.692
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5479
- Wer: 0.5740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.7378 | 18.18 | 400 | 3.5047 | 1.0 |
| 3.1707 | 36.36 | 800 | 2.6166 | 0.9912 |
| 1.4942 | 54.55 | 1200 | 0.5778 | 0.6927 |
| 1.2058 | 72.73 | 1600 | 0.5168 | 0.6362 |
| 1.0558 | 90.91 | 2000 | 0.5105 | 0.6069 |
| 0.9488 | 109.09 | 2400 | 0.5151 | 0.6089 |
| 0.8588 | 127.27 | 2800 | 0.5157 | 0.5989 |
| 0.7991 | 145.45 | 3200 | 0.5179 | 0.5740 |
| 0.7545 | 163.64 | 3600 | 0.5348 | 0.5740 |
| 0.7144 | 181.82 | 4000 | 0.5518 | 0.5724 |
| 0.7041 | 200.0 | 4400 | 0.5479 | 0.5740 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-mr --dataset mozilla-foundation/common_voice_8_0 --config mr --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-mr"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mr", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "या पानास लेखाचे स्वरूप यायला हावे"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 49.177 | 32.811 |
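The "With LM" numbers come from beam-search decoding with an n-gram language model. For a CTC model that does not already ship one, a KenLM can be attached via `pyctcdecode` — a sketch under the assumption that you provide your own LM file (`lm.binary` is a placeholder path):
```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

# Order the vocabulary by token id so it lines up with the CTC logit columns
base = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xls-r-300m-mr")
labels = [tok for tok, _ in sorted(base.tokenizer.get_vocab().items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(labels=labels, kenlm_model_path="lm.binary")
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=base.feature_extractor,
    tokenizer=base.tokenizer,
    decoder=decoder,
)
# processor_with_lm.batch_decode(logits.numpy()).text then yields LM-rescored strings
```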
|
anuragshas/wav2vec2-large-xlsr-53-dv | bfceeb41ed692550d1a1ab4d17b0a1402598631a | 2021-07-05T20:50:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"dv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-53-dv | 3 | null | transformers | 21,103 | ---
language: dv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Dhivehi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice dv
type: common_voice
args: dv
metrics:
- name: Test WER
type: wer
value: 55.68
---
# Wav2Vec2-Large-XLSR-53-Dhivehi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dhivehi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "dv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
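The same checkpoint can also be driven through the high-level `pipeline` API, which handles loading, resampling and decoding internally — a minimal sketch (`my_audio.wav` is a placeholder path):
```python
from transformers import pipeline

# The ASR pipeline resamples inputs to the model's 16 kHz rate automatically
asr = pipeline("automatic-speech-recognition", model="anuragshas/wav2vec2-large-xlsr-53-dv")
print(asr("my_audio.wav"))
```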
## Evaluation
The model can be evaluated as follows on the Dhivehi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "dv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\،\.\؟\–\'\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate the model on the test set,
# decoding the predictions in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 55.68 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anuragshas/wav2vec2-large-xlsr-53-rm-sursilv | cce58d37f62cf09c0b31edb3063aafa8c22c5cbd | 2021-07-05T21:14:18.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"rm-sursilv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-53-rm-sursilv | 3 | null | transformers | 21,104 | ---
language: rm-sursilv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Romansh Sursilv
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice rm-sursilv
type: common_voice
args: rm-sursilv
metrics:
- name: Test WER
type: wer
value: 25.78
---
# Wav2Vec2-Large-XLSR-53-Romansh Sursilv
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romansh Sursilv using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romansh Sursilv test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\„\–\…\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate the model on the test set,
# decoding the predictions in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.78 %
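Character error rate can be computed from the same `result` dataset produced by the script above — a sketch (CER was not reported in the original card):
```python
from datasets import load_metric

# Reuses `result` from the evaluation script above
cer = load_metric("cer")
print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```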
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anuragshas/wav2vec2-large-xlsr-53-sah | a101728e836e51c8ef2443787f2f07e9489877e6 | 2021-07-05T21:26:28.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sah",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-53-sah | 3 | null | transformers | 21,105 | ---
language: sah
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Sakha
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sah
type: common_voice
args: sah
metrics:
- name: Test WER
type: wer
value: 38.04
---
# Wav2Vec2-Large-XLSR-53-Sakha
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Sakha using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sah", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Sakha test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sah", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\„\–\…\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate the model on the test set,
# decoding the predictions in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 38.04 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anusha/t5-base-finetuned-wikiSQL-sql-to-en_1 | 99ee9dcb8d645823442a89d81fbb000b9f7cf572 | 2021-06-23T11:23:41.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | anusha | null | anusha/t5-base-finetuned-wikiSQL-sql-to-en_1 | 3 | null | transformers | 21,106 | Entry not found |
anusha/t5-base-finetuned-wikiSQL-sql-to-en_15i | 59167c7aa38261cd85af8bfb98fbdd2a54eec652 | 2021-06-23T11:25:27.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | anusha | null | anusha/t5-base-finetuned-wikiSQL-sql-to-en_15i | 3 | null | transformers | 21,107 | Entry not found |
aodiniz/bert_uncased_L-10_H-512_A-8_squad2 | 53c1ce069bbf57dc556729453e742e6ee58346ff | 2021-05-18T23:46:31.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-10_H-512_A-8_squad2 | 3 | null | transformers | 21,108 | Entry not found |
aodiniz/bert_uncased_L-2_H-128_A-2_squad2_covid-qna | 15cd77bc2a0245900ce8f6c1fb52ba4373d60c20 | 2021-05-18T23:48:37.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-128_A-2_squad2_covid-qna | 3 | null | transformers | 21,109 | Entry not found |
aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616 | bbb574bf4d0d4494e35f4fa2eef427f3b709f6d9 | 2021-05-18T23:50:55.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616 | 3 | null | transformers | 21,110 | # BERT L-4 H-256 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with [4 Transformer layers and hidden embedding of size 256](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-4_H-256_A-4 \
    --do_train \
    --train_data_file {cord19-200616-dataset} \
    --mlm \
    --mlm_probability 0.2 \
    --line_by_line \
    --block_size 256 \
    --per_device_train_batch_size 20 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-4_H-256_A-4_cord19-200616
```
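Once fine-tuned, the checkpoint can be queried for masked-token predictions — a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# Fill-mask inference with the fine-tuned checkpoint
fill_mask = pipeline("fill-mask", model="aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616")
print(fill_mask("Coronavirus disease is transmitted via respiratory [MASK]."))
```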
|
aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616_squad2_covid-qna | 51f727c0d10f38ea04055f0eca5d3d33d32b5270 | 2021-05-18T23:52:07.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616_squad2_covid-qna | 3 | null | transformers | 21,111 | Entry not found |
aodiniz/bert_uncased_L-4_H-512_A-8_cord19-200616_squad2 | 9c13ec92c6571bbecf98bfbc0416af37a3b95409 | 2021-05-18T23:53:41.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-512_A-8_cord19-200616_squad2 | 3 | null | transformers | 21,112 | Entry not found |
aodiniz/bert_uncased_L-4_H-768_A-12_cord19-200616_squad2_covid-qna | 2a08daeb321b10ac4697a0ef40e6a9f045221e02 | 2021-05-18T23:56:52.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-768_A-12_cord19-200616_squad2_covid-qna | 3 | null | transformers | 21,113 | Entry not found |
aodiniz/bert_uncased_L-4_H-768_A-12_squad2_covid-qna | 715a74fcd4576874767a5434212d9ef0d8a55810 | 2021-05-18T23:58:27.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-768_A-12_squad2_covid-qna | 3 | null | transformers | 21,114 | Entry not found |
arampacha/wav2vec2-large-xlsr-czech | 6059f5148736ef059c8394c60cb280e93d7bdd06 | 2021-07-05T21:59:41.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"cs",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-large-xlsr-czech | 3 | null | transformers | 21,115 | ---
language: cs
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Czech XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cs
type: common_voice
args: cs
metrics:
- name: Test WER
type: wer
value: 24.56
---
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", '«', '»', '—', '…', '(', ')', '*', '”', '“']
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
# Note: this model was trained ignoring accents on letters, as shown below
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().strip()
batch["sentence"] = re.sub(re.compile('[äá]'), 'a', batch['sentence'])
batch["sentence"] = re.sub(re.compile('[öó]'), 'o', batch['sentence'])
batch["sentence"] = re.sub(re.compile('[èé]'), 'e', batch['sentence'])
batch["sentence"] = re.sub(re.compile("[ïí]"), 'i', batch['sentence'])
batch["sentence"] = re.sub(re.compile("[üů]"), 'u', batch['sentence'])
batch['sentence'] = re.sub(' +', ' ', batch['sentence'])  # collapse repeated spaces
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate the model on the test set,
# decoding the predictions in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.56
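The vowel substitutions in the evaluation script fold only a handful of accented characters. A more general (and more aggressive — it also strips háčky) alternative could use `unicodedata`; a sketch, not the normalization used for the reported number:
```python
import unicodedata

def strip_accents(text: str) -> str:
    # Decompose characters, then drop the combining marks
    return "".join(ch for ch in unicodedata.normalize("NFD", text)
                   if unicodedata.category(ch) != "Mn")

print(strip_accents("příliš žluťoučký kůň"))  # -> "prilis zlutoucky kun"
```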
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training will be available [here](https://github.com/arampacha/hf-sprint-xlsr) soon. |
arawat/pegasus-custom-xsum | 8ab7f250e6f8c6b754c48c48dee794780a790be6 | 2021-11-26T10:26:28.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | arawat | null | arawat/pegasus-custom-xsum | 3 | null | transformers | 21,116 | ---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ardatasc/miniMe-version1 | 07f19af2adb3ed2d12acf539c26db9c60c052a87 | 2021-09-09T11:01:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ardatasc | null | ardatasc/miniMe-version1 | 3 | null | transformers | 21,117 | ---
tags:
- conversational
---
# Mini-Me |
aretw0/t5-small-finetuned-en-to-ro-epoch.04375 | 4403b05b7b0f42c8f6a0ccf81c96b7d43669e1a4 | 2021-12-01T21:21:30.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | aretw0 | null | aretw0/t5-small-finetuned-en-to-ro-epoch.04375 | 3 | null | transformers | 21,118 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-epoch.04375
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-epoch.04375
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4137
- Bleu: 7.3292
- Gen Len: 18.2541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6211 | 0.04 | 1669 | 1.4137 | 7.3292 | 18.2541 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
arjun3816/autonlp-sam_summarization1-15492651 | bba03676211c84b9c8a00d12a6e5c9b8bdc2d4bc | 2021-10-07T02:28:05.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:arjun3816/autonlp-data-sam_summarization1",
"transformers",
"autonlp",
"autotrain_compatible"
] | text2text-generation | false | arjun3816 | null | arjun3816/autonlp-sam_summarization1-15492651 | 3 | null | transformers | 21,119 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- arjun3816/autonlp-data-sam_summarization1
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15492651
## Validation Metrics
- Loss: 1.4060134887695312
- Rouge1: 50.9953
- Rouge2: 35.9204
- RougeL: 43.5673
- RougeLsum: 46.445
- Gen Len: 58.0193
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/arjun3816/autonlp-sam_summarization1-15492651
``` |
artemis13fowl/distilbert-base-uncased-finetuned-imdb | 32e5839ad021b10a500dd1fd07e2f3927973a5b7 | 2022-01-23T14:10:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | artemis13fowl | null | artemis13fowl/distilbert-base-uncased-finetuned-imdb | 3 | null | transformers | 21,120 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5756 | 2.0 | 314 | 2.4230 |
| 2.5395 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
lmqg/t5-small-squad-no-paragraph | 03516fa3f74b35f3274801e1237fc28a22779821 | 2022-06-01T00:25:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-squad-no-paragraph | 3 | null | transformers | 21,121 | Entry not found |
asahi417/tner-roberta-large-multiconer-en | f5bc585a8bff437983ddfd7f82e4574fea9b15ec | 2022-01-26T23:00:39.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | asahi417 | null | asahi417/tner-roberta-large-multiconer-en | 3 | null | transformers | 21,122 | Entry not found |
asahi417/tner-xlm-roberta-large-multiconer-mix | 74c57743180cf4fe74091d33984ffd43f5fb66ba | 2022-01-26T22:59:43.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | asahi417 | null | asahi417/tner-xlm-roberta-large-multiconer-mix | 3 | null | transformers | 21,123 | Entry not found |
tner/xlm-roberta-large-panx-dataset-ar | cd55be3ae95b7d0ec00bacaf70c712fee1d8ba99 | 2021-02-13T00:04:41.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-panx-dataset-ar | 3 | null | transformers | 21,124 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See more details at the [TNER repository](https://github.com/asahi417/tner).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ar")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ar")
```
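To run inference, the token-classification pipeline can wrap the loaded model and tokenizer — a minimal sketch (the example sentence is illustrative, and the exact label set depends on the PANX annotation scheme):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word pieces into whole entity spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("جامعة القاهرة في مصر"))  # "Cairo University in Egypt"
```
|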
aseifert/distilbert-base-german-cased-comma-derstandard | b4e91c19d3f4815178cc3641c824fb9b98ac2218 | 2021-04-07T19:18:06.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | aseifert | null | aseifert/distilbert-base-german-cased-comma-derstandard | 3 | 1 | transformers | 21,125 | Entry not found |
ashwinchandran13/DialoGPT-small-harrypotter | a392d90d3e6a90605ed02fcfa5bfba2039f4edcb | 2021-09-19T08:22:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ashwinchandran13 | null | ashwinchandran13/DialoGPT-small-harrypotter | 3 | null | transformers | 21,126 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
aviator-neural/mbart_jokes | c47c759b107b94b009d74d386cfbc3b333cc5a71 | 2022-01-20T14:31:08.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | aviator-neural | null | aviator-neural/mbart_jokes | 3 | null | transformers | 21,127 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mbart_jokes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_jokes
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0282
## Model description
This model was trained on a jokes dataset: you can ask a question and the model gives a funny answer.
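A minimal generation sketch (the prompt format is illustrative; the card does not document the expected input):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("aviator-neural/mbart_jokes")
model = AutoModelForSeq2SeqLM.from_pretrained("aviator-neural/mbart_jokes")

inputs = tokenizer("Why did the chicken cross the road?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```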
## Intended uses & limitations
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3455 | 1.0 | 1914 | 3.0282 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
avioo1/roberta-base-squad2-finetuned-squad | 1649f2b179fa314e0249fe276f59fe1ed4f6f43c | 2021-09-29T11:55:18.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | avioo1 | null | avioo1/roberta-base-squad2-finetuned-squad | 3 | null | transformers | 21,128 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 74 | 1.7148 |
| No log | 2.0 | 148 | 1.6994 |
| No log | 3.0 | 222 | 1.7922 |
| No log | 4.0 | 296 | 1.9947 |
| No log | 5.0 | 370 | 2.0753 |
| No log | 6.0 | 444 | 2.2096 |
| 0.9547 | 7.0 | 518 | 2.3070 |
| 0.9547 | 8.0 | 592 | 2.6947 |
| 0.9547 | 9.0 | 666 | 2.7169 |
| 0.9547 | 10.0 | 740 | 2.8503 |
| 0.9547 | 11.0 | 814 | 3.1990 |
| 0.9547 | 12.0 | 888 | 3.4931 |
| 0.9547 | 13.0 | 962 | 3.6575 |
| 0.3191 | 14.0 | 1036 | 3.1863 |
| 0.3191 | 15.0 | 1110 | 3.7922 |
| 0.3191 | 16.0 | 1184 | 3.6336 |
| 0.3191 | 17.0 | 1258 | 4.1156 |
| 0.3191 | 18.0 | 1332 | 4.1353 |
| 0.3191 | 19.0 | 1406 | 3.9888 |
| 0.3191 | 20.0 | 1480 | 4.4290 |
| 0.1904 | 21.0 | 1554 | 4.0473 |
| 0.1904 | 22.0 | 1628 | 4.5048 |
| 0.1904 | 23.0 | 1702 | 4.4026 |
| 0.1904 | 24.0 | 1776 | 4.2864 |
| 0.1904 | 25.0 | 1850 | 4.3941 |
| 0.1904 | 26.0 | 1924 | 4.4921 |
| 0.1904 | 27.0 | 1998 | 4.9139 |
| 0.1342 | 28.0 | 2072 | 4.8914 |
| 0.1342 | 29.0 | 2146 | 5.0148 |
| 0.1342 | 30.0 | 2220 | 5.0220 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
awvik360/DialoGPT-medium-plemons | 4e8ddf4cf19ed6d609985ce914b85f81e29e2cae | 2021-06-22T23:35:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | awvik360 | null | awvik360/DialoGPT-medium-plemons | 3 | null | transformers | 21,129 | ---
tags:
- conversational
---
# My Awesome Model |
ayameRushia/wav2vec2-large-xls-r-300m-ar | 9f94b88e943412f0b9f11eb4ef412c93bdb027fc | 2022-02-07T09:03:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ayameRushia | null | ayameRushia/wav2vec2-large-xls-r-300m-ar | 3 | null | transformers | 21,130 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4819
- Wer: 0.4244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 11.0435 | 0.67 | 400 | 4.3104 | 1.0 |
| 3.4451 | 1.34 | 800 | 3.1566 | 1.0 |
| 3.1399 | 2.01 | 1200 | 3.0532 | 0.9990 |
| 2.8538 | 2.68 | 1600 | 1.6994 | 0.9238 |
| 1.7195 | 3.35 | 2000 | 0.8867 | 0.6727 |
| 1.326 | 4.02 | 2400 | 0.6603 | 0.5834 |
| 1.1561 | 4.69 | 2800 | 0.5809 | 0.5479 |
| 1.0764 | 5.36 | 3200 | 0.5943 | 0.5495 |
| 1.0144 | 6.03 | 3600 | 0.5344 | 0.5251 |
| 0.965 | 6.7 | 4000 | 0.4844 | 0.4936 |
| 0.927 | 7.37 | 4400 | 0.5048 | 0.5019 |
| 0.8985 | 8.04 | 4800 | 0.5809 | 0.5267 |
| 0.8684 | 8.71 | 5200 | 0.4740 | 0.4753 |
| 0.8581 | 9.38 | 5600 | 0.4813 | 0.4834 |
| 0.8334 | 10.05 | 6000 | 0.4515 | 0.4545 |
| 0.8134 | 10.72 | 6400 | 0.4370 | 0.4543 |
| 0.8002 | 11.39 | 6800 | 0.4225 | 0.4384 |
| 0.7884 | 12.06 | 7200 | 0.4593 | 0.4565 |
| 0.7675 | 12.73 | 7600 | 0.4752 | 0.4680 |
| 0.7607 | 13.4 | 8000 | 0.4950 | 0.4771 |
| 0.7475 | 14.07 | 8400 | 0.4373 | 0.4391 |
| 0.7397 | 14.74 | 8800 | 0.4506 | 0.4541 |
| 0.7289 | 15.41 | 9200 | 0.4840 | 0.4691 |
| 0.722 | 16.08 | 9600 | 0.4701 | 0.4571 |
| 0.7067 | 16.75 | 10000 | 0.4561 | 0.4461 |
| 0.7033 | 17.42 | 10400 | 0.4384 | 0.4347 |
| 0.6915 | 18.09 | 10800 | 0.4424 | 0.4290 |
| 0.6854 | 18.76 | 11200 | 0.4635 | 0.4360 |
| 0.6813 | 19.43 | 11600 | 0.4280 | 0.4147 |
| 0.6776 | 20.1 | 12000 | 0.4610 | 0.4344 |
| 0.67 | 20.77 | 12400 | 0.4540 | 0.4367 |
| 0.6653 | 21.44 | 12800 | 0.4509 | 0.4234 |
| 0.6609 | 22.11 | 13200 | 0.4874 | 0.4444 |
| 0.6541 | 22.78 | 13600 | 0.4542 | 0.4230 |
| 0.6528 | 23.45 | 14000 | 0.4732 | 0.4373 |
| 0.6463 | 24.12 | 14400 | 0.4483 | 0.4188 |
| 0.6399 | 24.79 | 14800 | 0.4731 | 0.4341 |
| 0.6353 | 25.46 | 15200 | 0.5031 | 0.4412 |
| 0.6358 | 26.13 | 15600 | 0.4986 | 0.4397 |
| 0.6317 | 26.8 | 16000 | 0.5000 | 0.4360 |
| 0.6262 | 27.47 | 16400 | 0.4958 | 0.4318 |
| 0.6317 | 28.14 | 16800 | 0.4738 | 0.4234 |
| 0.6205 | 28.81 | 17200 | 0.4853 | 0.4262 |
| 0.6205 | 29.48 | 17600 | 0.4819 | 0.4244 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ayameRushia/wav2vec2-large-xls-r-300m-mn | f28abd3f40fe3d73577ebd87b8733b24d93a604b | 2022-05-09T01:57:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mn",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ayameRushia | null | ayameRushia/wav2vec2-large-xls-r-300m-mn | 3 | null | transformers | 21,131 | ---
language:
- mn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-mn
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mn
metrics:
- name: Test WER using LM
type: wer
value: 31.3919
- name: Test CER using LM
type: cer
value: 10.2565
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mn
metrics:
- name: Test WER
type: wer
value: 65.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: mn
metrics:
- name: Test WER
type: wer
value: 63.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mn
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5502
- Wer: 0.4042
## Training and evaluation data
Evaluation was conducted in a notebook, included in this repo as `notebook_evaluation_wav2vec2_mn.ipynb`.

Test results without LM:
- WER = 58.2171 %
- CER = 16.0670 %

Test results with LM:
- WER = 31.3919 %
- CER = 10.2565 %

How to use `eval.py`:
```
huggingface-cli login #login to huggingface for getting auth token to access the common voice v8
# running with LM
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test
# running without LM
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test --greedy
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 6.35 | 400 | 0.9380 | 0.7902 |
| 3.2674 | 12.7 | 800 | 0.5794 | 0.5309 |
| 0.7531 | 19.05 | 1200 | 0.5749 | 0.4815 |
| 0.5382 | 25.4 | 1600 | 0.5530 | 0.4447 |
| 0.4293 | 31.75 | 2000 | 0.5709 | 0.4237 |
| 0.4293 | 38.1 | 2400 | 0.5476 | 0.4059 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
baaastien/xls-r-ab-test | 79d43b5b1f8b4d73ddc125dbf8977cee05f603b5 | 2022-01-19T12:03:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | baaastien | null | baaastien/xls-r-ab-test | 3 | null | transformers | 21,132 | ---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-ab-test
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
banalyst/wonder-egg | bb3ee0e5016d6ddd8c6a7b21a954df133285c626 | 2021-07-20T03:39:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | banalyst | null | banalyst/wonder-egg | 3 | null | transformers | 21,133 | TRIGGER WARNING
---------------
This model was created by training GPT2-medium on a custom dataset containing tens of thousands of blog posts about people's experiences living with mental illnesses. As such, the texts that this model generates may be triggering and/or NSFW. Please explore at your own discretion.
The compiled blog posts covered six different mental health conditions: depression, PTSD, CPTSD, borderline personality disorder, bipolar disorder (non-specific), and dissociation. These are very serious illnesses, so please treat this with respect; I encourage everyone to learn more about these conditions.
Thank you, and enjoy!
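For anyone who wants to experiment (mindful of the warning above), the checkpoint loads like any other GPT-2 model; a minimal sketch, with illustrative prompt and sampling settings that are not the author's:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="banalyst/wonder-egg")
# Prompt and generation parameters here are illustrative placeholders.
print(generator("Lately I have been feeling", max_length=60, do_sample=True)[0]["generated_text"])
```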
|
bdwjaya/t5-small-finetuned-xsum | c68e1a039bed25bc2137da6b99945c7665a0d292 | 2021-10-19T03:34:18.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bdwjaya | null | bdwjaya/t5-small-finetuned-xsum | 3 | null | transformers | 21,134 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
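Until the author fills in details, here is a minimal summarization sketch; the input text and generation lengths are placeholders:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="bdwjaya/t5-small-finetuned-xsum")
article = "Replace this with the news article you want summarized."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```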
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
beomi/kcbert-base-dev | f8c7abd6ca165c9fa7e00048384acefc39edcb93 | 2021-05-19T12:28:53.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | beomi | null | beomi/kcbert-base-dev | 3 | null | transformers | 21,135 | Entry not found |
beta13/dummy-bert-base-cased | 88da3f35d94d9b5e34454928a0e62d29fbdf0785 | 2021-06-24T11:46:32.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | beta13 | null | beta13/dummy-bert-base-cased | 3 | null | transformers | 21,136 | Entry not found |
bettertextapp/bart_large_teaser_de_v2 | 259602562f8fb01a05ef06ab3417beb1a538e023 | 2022-02-23T10:17:34.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bettertextapp | null | bettertextapp/bart_large_teaser_de_v2 | 3 | null | transformers | 21,137 | ---
tags:
- generated_from_trainer
model-index:
- name: bart_large_teaser_de_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_large_teaser_de_v2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
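Absent official usage notes, a minimal generation sketch for this mBART checkpoint (the German input sentence is made up):
```python
from transformers import pipeline
teaser = pipeline("text2text-generation", model="bettertextapp/bart_large_teaser_de_v2")
# Illustrative input; the model appears to target German teaser generation.
print(teaser("Der Artikel beschreibt die neuesten Entwicklungen in der Raumfahrt.")[0]["generated_text"])
```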
## Training and evaluation data
Evaluation metrics reported by the training run:
- eval_loss: 0.2029
- eval_score (BLEU): 80.7510
- eval_precisions (1- to 4-gram): 90.9380 / 85.0857 / 83.2049 / 81.4048
- eval_counts: 342359 / 316072 / 304925 / 294258
- eval_totals: 376475 / 371475 / 366475 / 361475
- eval_bp (brevity penalty): 0.9491
- eval_sys_len: 376475; eval_ref_len: 396155
- eval_runtime: 431.94 s (11.576 samples/s, 0.363 steps/s)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
bhavikardeshna/multilingual-bert-base-cased-hindi | 6a7d67f3d60c57c5de6ed36de54db1c8d09486c8 | 2021-12-21T11:43:34.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
] | question-answering | false | bhavikardeshna | null | bhavikardeshna/multilingual-bert-base-cased-hindi | 3 | null | transformers | 21,138 | # BibTeX entry and citation info
```bibtex
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhuvaneswari/t5-small-finetuned-xsum | e61406b0b95b32ad930834be7c1b1cf5a1f0d8f7 | 2021-11-15T02:02:40.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | bhuvaneswari | null | bhuvaneswari/t5-small-finetuned-xsum | 3 | null | transformers | 21,139 | Entry not found |
bigjoedata/rockchatbot | 53d0d93c2834f8961d73556bdbcecaa4dc743f88 | 2021-05-21T14:20:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bigjoedata | null | bigjoedata/rockchatbot | 3 | null | transformers | 21,140 | Entry not found |
bigscience/T0_original_task_only | f8dd30cb3ade71c77b7bb56a49c3ec39b41d6942 | 2022-06-21T01:29:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2110.08207",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | bigscience | null | bigscience/T0_original_task_only | 3 | null | transformers | 21,141 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
**How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
**Official repository**: [bigscience-workshop/t-zero](https://github.com/bigscience-workshop/t-zero)
# Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*.
A few other examples that you can try:
- *A is the son's of B's uncle. What is the family relationship between A and B?*
- *Question A: How is air traffic controlled?<br>
Question B: How do you become an air traffic controller?<br>
Pick one: these questions are duplicates or not duplicates.*
- *Is the word 'table' used in the same meaning in the two following sentences?<br><br>
Sentence A: you can leave the books on the table over there.<br>
Sentence B: the tables in this book are very hard to read.*
- *Max: Know any good websites to buy clothes from?<br>
Payton: Sure :) LINK 1, LINK 2, LINK 3<br>
Max: That's a lot of them!<br>
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br>
Max: I'll check them out. Thanks.<br><br>
Who or what are Payton and Max referring to when they say 'them'?*
- *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br>
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br>
Which book is the leftmost book?*
- *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.*
# How to use
We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[T0](https://huggingface.co/bigscience/T0)|11 billion|
|[T0p](https://huggingface.co/bigscience/T0p)|11 billion|
|[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion|
|[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion|
|[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion|
|[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion|
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
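For example, to load the checkpoint this card describes:
```python
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_original_task_only")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_original_task_only")
```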
**Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.**
# Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples)
- Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
# Training data
We trained different variants T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but only one prompt per training dataset|
|T0_original_task_only|Same as T0 but only original tasks templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
# Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Limitations
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html).
- We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non English text.
# Bias and fairness
Even if we took deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the models trained are not bias-free. Based on a few experimentations, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics:
- Input: `Is the earth flat?` - Prediction: `yes`
- Input: `Do vaccines cause autism?` - Prediction: `yes`
- Input: `Complete this sentence: This man works as a` - Prediction: `Architect`
- Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny`
- Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex`
- Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault`
- Input: `what is something everyone hates, but you like?` - Prediction: `sex`
- Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex`
- Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut`
- Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy`
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases present in the masked language models using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
<table>
<tr>
<td>Dataset</td>
<td>Model</td>
<td>Average (Acc.)</td>
<td>Median (Acc.)</td>
</tr>
<tr>
<td rowspan="6">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td>
</tr>
<tr>
<td>T0p</td><td>57.6</td><td>83.8</td>
</tr>
<tr>
<td>T0pp</td><td>62.7</td><td>64.4</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>57.6</td><td>69.5</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>47.1</td><td>37.8</td>
</tr>
<tr>
<td>T0_3B</td><td>56.9</td><td>82.6</td>
</tr>
<tr>
<td rowspan="6">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td>
</tr>
<tr>
<td>T0p</td><td>80.1</td><td>80.6</td>
</tr>
<tr>
<td>T0pp</td><td>89.2</td><td>90.0</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>81.6</td><td>84.6</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>83.7</td><td>83.8</td>
</tr>
<tr>
<td>T0_3B</td><td>69.7</td><td>69.4</td>
</tr>
</table>
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2">Subset</td>
<td colspan="3">Average (Acc.)</td>
<td colspan="3">Median (Acc.)</td>
</tr>
<tr>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
</tr>
<tr>
<td rowspan="2">T0</td><td>Type 1</td>
<td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0p</td><td>Type 1</td>
<td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td>
</tr>
<tr>
<td rowspan="2">T0pp</td><td>Type 1</td>
<td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td>
</tr>
<tr>
<td>Type 2</td>
<td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td>
</tr>
<tr>
<td rowspan="2">T0_single_prompt</td><td>Type 1</td>
<td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td>
</tr>
<tr>
<td rowspan="2">T0_original_task_only</td><td>Type 1</td>
<td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td>
</tr>
<tr>
<td>Type 2</td>
<td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0_3B</td><td>Type 1</td>
<td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75.0</td><td>10.9</td>
</tr>
</table>
# BibTeX entry and citation info
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
bitmorse/kickstarter-distilbert-model | 9eaa8bfe6d32465cc5a47dc05919b10dc27a3177 | 2022-02-10T06:31:50.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers",
"generated_from_keras_callback",
"model-index"
] | feature-extraction | false | bitmorse | null | bitmorse/kickstarter-distilbert-model | 3 | null | transformers | 21,142 | ---
tags:
- generated_from_keras_callback
model-index:
- name: kickstarter-distilbert-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kickstarter-distilbert-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
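Given the model's feature-extraction pipeline tag and the TensorFlow training setup, a minimal sketch (the example sentence is invented):
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("bitmorse/kickstarter-distilbert-model")
model = TFAutoModel.from_pretrained("bitmorse/kickstarter-distilbert-model")
inputs = tokenizer("A board game about space pirates.", return_tensors="tf")
outputs = model(inputs)  # outputs.last_hidden_state holds the token embeddings
```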
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.2
- Tokenizers 0.11.0
|
bob80333/speechbrain_ja2en_st_63M_yt600h | 2bcc79f2c66e107dc33ae0adeaea07b1f33e9348 | 2022-01-14T00:45:47.000Z | [
"en",
"speechbrain",
"speech-translation",
"CTC",
"Attention",
"Transformer",
"pytorch",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | bob80333 | null | bob80333/speechbrain_ja2en_st_63M_yt600h | 3 | 1 | speechbrain | 21,143 | ---
language: "en"
thumbnail:
tags:
- speech-translation
- CTC
- Attention
- Transformer
- pytorch
- speechbrain
- automatic-speech-recognition
metrics:
- BLEU
---
# Conformer Encoder/Decoder for Speech Translation
This model was trained with [SpeechBrain](https://speechbrain.github.io) and is based on the Fisher Callhome recipe.
The performance of the model is the following:
| Release | CoVoSTv2 JA->EN Test BLEU | Custom Dataset Validation BLEU | Custom Dataset Test BLEU | GPUs |
|:-------------:|:--------------:|:--------------:|:--------------:|:--------:|
| 01-13-21 | 9.73 | 8.38 | 12.01 | 1xRTX 3090 |
This model was trained on subtitled audio downloaded from YouTube, and was not fine-tuned on the CoVoSTv2 training set.
When calculating the BLEU score for CoVoSTv2, the utterances were first run through the same preprocessing pipeline used on the model's training data: all punctuation except apostrophes is removed and the text is lowercased, mirroring the Fisher Callhome data preparation in the SpeechBrain recipe.
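That normalization can be approximated in a couple of lines; this is a sketch of the described steps, not the exact pipeline code:
```python
import re
def normalize(text: str) -> str:
    # Lowercase and strip punctuation other than apostrophes, per the description above.
    return re.sub(r"[^\w\s']", "", text.lower())
```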
## Pipeline description
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, install SpeechBrain with the following command:
```
pip install speechbrain
```
### Transcribing your own audio files (spoken Japanese to written English)
```python
from speechbrain.pretrained import EncoderDecoderASR
st_model = EncoderDecoderASR.from_hparams(source="bob80333/speechbrain_ja2en_st_63M_yt600h")
st_model.transcribe_file("your_file_here.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
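Concretely, that looks like this:
```python
from speechbrain.pretrained import EncoderDecoderASR
# Same call as above, but placing the model on GPU.
st_model = EncoderDecoderASR.from_hparams(
    source="bob80333/speechbrain_ja2en_st_63M_yt600h",
    run_opts={"device": "cuda"},
)
```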
### Limitations:
The model tends to get caught in repetition loops, and its low BLEU scores reflect weak translation quality.
The outputs of this model are unlikely to be correct; do not rely on it for any serious purpose.
This model was trained on data from YouTube and has inherited whatever biases can be found in YouTube audio and subtitles.
The creator of this model doesn't actually know Japanese. |
bookbot/gpt2-indo-medium-kids-stories | a838101d8bf3b5ffe524551ecf956aa19611b259 | 2021-10-02T14:53:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"id",
"transformers",
"gpt2-indo-medium-kids-stories",
"license:mit"
] | text-generation | false | bookbot | null | bookbot/gpt2-indo-medium-kids-stories | 3 | null | transformers | 21,144 | ---
language: id
tags:
- gpt2-indo-medium-kids-stories
license: mit
widget:
- text: "Archie sedang mengendarai roket ke planet Mars."
---
## GPT-2 Indonesian Medium Kids Stories
GPT-2 Indonesian Medium Kids Stories is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. The model was originally the pre-trained [GPT2 Medium Indonesian](https://huggingface.co/flax-community/gpt2-medium-indonesian) model, which was then fine-tuned on Indonesian kids' stories from [Room To Read](https://literacycloud.org/) and [Let's Read](https://reader.letsreadasia.org/).
10% of the dataset was kept for evaluation purposes. The pre-trained model was fine-tuned and achieved an evaluation loss of 3.579 and an evaluation perplexity of 35.84.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------- | ------- | ----------- | --------------------------------- |
| `gpt2-indo-medium-kids-stories` | 345M | GPT2 Medium | Indonesian Kids' Stories (860 KB) |
## Evaluation Results
The model was fine-tuned for 3 epochs.
| Epoch | Training Loss | Validation Loss |
| ----- | ------------- | --------------- |
| 1 | 3.909100 | 3.627678 |
| 2 | 3.375300 | 3.562854 |
| 3 | 3.113300 | 3.578999 |
## How to Use (PyTorch)
### As Causal Language Model
```python
from transformers import pipeline
pretrained_name = "bookbot/gpt2-indo-medium-kids-stories"
nlp = pipeline(
"text-generation",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Archie sedang mengendarai roket ke planet Mars.")
```
### Feature Extraction in PyTorch
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
pretrained_name = "bookbot/gpt2-indo-medium-kids-stories"
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
prompt = "Archie sedang mengendarai roket ke planet Mars."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model.
## Author
GPT-2 Indonesian Medium Kids Stories was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
|
briverse/vi-electra-base-uncased | b4f6f94893a9952008db4f4cc4f76b9b1eb16fa6 | 2021-02-04T14:12:16.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-base-uncased | 3 | null | transformers | 21,145 | Entry not found |
briverse/vi-electra-large-cased-800 | d8f3fb59517bfc921b75ce6abcea46f0b674322e | 2021-02-04T15:25:38.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-large-cased-800 | 3 | null | transformers | 21,146 | Entry not found |
bs-modeling-metadata/html-metadata-exp1-subexp2-1929863 | 1bbb287cac8c9c5f1b745e16bfc460d174c788f4 | 2021-11-13T09:21:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bs-modeling-metadata | null | bs-modeling-metadata/html-metadata-exp1-subexp2-1929863 | 3 | null | transformers | 21,147 | # Work In Progress
# How to use?
This model can only generate regular text.
# Training details
We continued the pre-training of [gpt2](https://huggingface.co/gpt2).
Dataset:[Natural_Questions_HTML_reduced_all](https://huggingface.co/datasets/SaulLu/Natural_Questions_HTML_reduced_all)
100% of the examples were just plain text.
Training example:
```
start up firms to succeed.[4] Firms like power companies, cable television companies and wireless communication companies with large start up costs fall within this category. A company wishing to enter such industries must have the financial ability to spend millions of dollars before starting operations and generating any revenue.[5] Similarly established firms also have a competitive advantage over new firms. An established firm threatened by a new competitor can lower prices to drive out the competition. Microsoft is a firm that has substantial pricing or market power due to technological superiority in its design and production processes.[4] Finally government created barriers to entry can be a source of market power. A prime example are patents granted to pharmaceutical companies. These patents give the drug companies a virtual monopoly in the protected product for the term of the patent.
Measurement[edit]
Concentration ratios are the most common measures of market power.[6] The four-firm concentration ratio measures the percentage of total industry output attributable to the top four companies. For monopolies the four firm ratio is 100 per cent while the ratio is zero for perfect competition.[7] The four firm concentration domestic (U.S) ratios for cigarettes is 93%; for automobiles, 84% and for beer, 85%.[8]
Another measure of concentration is the Herfindahl-Hirschman Index (HHI) which is calculated by "summing the squares of the percentage market shares of all participants in the market".[8] The HHI index for perfect competition is zero; for monopoly, 10,000.
U.S. courts almost never consider a firm to possess market power if it has a market share of less than 50 percent.[9]
Elasticity of demand[edit]
Market power is the ability to raise price above marginal cost (MC) and earn a positive profit.[10] The degree to which a firm can raise price (P) above marginal cost depends on the shape of the demand curve at the profit maximizing output.[10] That is, elasticity is the critical factor in determining market power. The relationship between market power and the price elasticity of demand (PED) can be summarized by the equation:
P M C = P E D 1 + P E D. {\displaystyle {\frac {P}{MC}}={\frac {PED}{1+PED}}.}
Note that PED will be negative, so the ratio is always greater than one. The higher the P/MC ratio, the more market power the firm possesses. As PED increases in magnitude, the P/MC ratio approaches one, and market power approaches zero.[11] The equation is derived from the monopolist pricing rule:
P − M C P = − 1 P E D. {\displaystyle {\frac {P-MC}{P}}=-{\frac {1}{PED}}.}
Nobel Memorial Prize[edit]
Jean Tirole was awarded the 2014 Nobel Memorial Prize in Economic Sciences for his analysis of market power and economic regulation.
See also[edit]
Bargaining power
Imperfect competition
Market concentration
Natural monopoly
Predatory pricing
Price discrimination
Dominance (economics)
References[edit]
Jump up ^ Vatiero Massimiliano (2010). "The Ordoliberal notion of market power: an institutionalist reassessment". European Competition Journal. 6 (3): 689–707. doi:10.5235/ecj.v6n3.689.
Jump up ^ Vatiero M. (2009), "An Institutionalist Explanation of Market Dominances". World Competition. Law and Economics Review, 32(2):221–226.
Jump up ^ If the power company raised rates the customer either pays the increase or does without power.
^ Jump up to: a b c d e Krugman & Wells, Microeconomics 2d ed. (Worth 2009)
Jump up ^ Often such natural monopolies will also have the benefit of government granted monopolies.
Jump up ^ Samuelson & Nordhaus, Microeconomics, 17th ed. (McGraw-Hill 2001) at 183–184.
Jump up ^ Samuelson & Nordhaus, Microeconomics, 17th ed. (McGraw-Hill 2001) at 183.
^ Jump up to: a b Samuelson & Nordhaus, Microeconomics, 17th ed. (McGraw-Hill 2001) at 184.
Jump up ^ J. Gregory Sidak & Hal J. Singer, Überregulation Without Economics: The World Trade Organization’s Decision in the U.S.-Mexico Arbitration on Telecommunications Services, General Agreement on Trade in Services, GATS, 57 FED. COMM. L.J. 1, 34 (2004), http://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=1388&context=fclj.
^ Jump up to: a b
```
|
bspans/DialoGPT-small-yoda | 18c0cdefb4a9edf13667df48525f9d09924417ee | 2021-09-02T09:35:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | bspans | null | bspans/DialoGPT-small-yoda | 3 | null | transformers | 21,148 | ---
tags:
- conversational
---
# Yoda DialoGPT Model |
bstad/bert-model | c79d212ae454488f7cc147dc4417c2f6b27b5edb | 2021-12-28T01:56:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | bstad | null | bstad/bert-model | 3 | null | transformers | 21,149 | Entry not found |
bstad/dummy-model | d13705e5ca4a24d4ea5415fb13dd4925d336021e | 2021-12-28T01:40:23.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | bstad | null | bstad/dummy-model | 3 | null | transformers | 21,150 | Entry not found |
btk/output_bert_uncased | f0ea8be6e63ce469060f5a5b43785202c4ac1ca8 | 2021-05-19T13:32:40.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | btk | null | btk/output_bert_uncased | 3 | null | transformers | 21,151 | Entry not found |
byeongal/bart-large | aead9ca89abfd0f8786cd7ce72c1c54d905d7906 | 2021-06-14T08:22:06.000Z | [
"pytorch",
"bart",
"feature-extraction",
"en",
"transformers",
"license:mit"
] | feature-extraction | false | byeongal | null | byeongal/bart-large | 3 | null | transformers | 21,152 | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
language: en
tags:
- bart
---
# BART base model for Teachable NLP
- This model was forked from [bart-base](https://huggingface.co/facebook/bart-base) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.
The Authors’ code can be found here:
https://github.com/pytorch/fairseq/tree/master/examples/bart
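A minimal PyTorch sketch for pulling features from this fork, mirroring standard BART usage:
```python
from transformers import BartTokenizer, BartModel
tokenizer = BartTokenizer.from_pretrained("byeongal/bart-large")
model = BartModel.from_pretrained("byeongal/bart-large")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)  # last_hidden_state can serve as sentence/token features
```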
|
byeongal/gpt2-large | a9a954388b3fb3b582ee1f9ac5873ace559e7a1a | 2021-06-22T03:08:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:mit"
] | text-generation | false | byeongal | null | byeongal/gpt2-large | 3 | null | transformers | 21,153 | ---
language: en
tags:
- gpt2
license: mit
---
# GPT-2
- This model was forked from [gpt2](https://huggingface.co/gpt2-large) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
cahya/gpt2-small-indonesian-personachat | 02dcae39fb913a7ee36cfd2391bb3f5769167264 | 2021-10-18T18:58:25.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | cahya | null | cahya/gpt2-small-indonesian-personachat | 3 | null | transformers | 21,154 | Entry not found |
cahya/wav2vec2-base-30h-1980e | 7181c8c911c2c71d7601b19511ba1f3f16ca6001 | 2021-07-05T23:37:13.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | cahya | null | cahya/wav2vec2-base-30h-1980e | 3 | null | transformers | 21,155 | Entry not found |
cahya/wav2vec2-base | 1693c8d28e65afbbd8821eea30611a8d41ff4011 | 2021-07-05T23:39:28.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers"
] | null | false | cahya | null | cahya/wav2vec2-base | 3 | null | transformers | 21,156 | Entry not found |
cahya/wav2vec2-large-xlsr-javanese | 35e917e9b05f46ca373015ae6ab5c609dde85710 | 2021-07-05T23:57:54.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"jv",
"dataset:openslr",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-javanese | 3 | null | transformers | 21,157 | ---
language: jv
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Javanese by cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR High quality TTS data for Javanese
type: OpenSLR
args: jv
metrics:
- name: Test WER
type: wer
value: 17.61
---
# Wav2Vec2-Large-XLSR-Javanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [OpenSLR High quality TTS data for Javanese](https://openslr.org/41/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets.utils.download_manager import DownloadManager
from pathlib import Path
import pandas as pd
def load_dataset_javanese():
urls = [
"https://www.openslr.org/resources/41/jv_id_female.zip",
"https://www.openslr.org/resources/41/jv_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"jv_id_female/wavs",
Path(download_dirs[1])/"jv_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"jv_id_female/line_index.tsv",
Path(download_dirs[1])/"jv_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t', names=["path", "client_id", "sentence"]))
dfs[1] = dfs[1].drop(["client_id"], axis=1)
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_javanese()
test_dataset = dataset['test']
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows or using this
[notebook](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb)
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from datasets.utils.download_manager import DownloadManager
from pathlib import Path
import pandas as pd
def load_dataset_javanese():
urls = [
"https://www.openslr.org/resources/41/jv_id_female.zip",
"https://www.openslr.org/resources/41/jv_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"jv_id_female/wavs",
Path(download_dirs[1])/"jv_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"jv_id_female/line_index.tsv",
Path(download_dirs[1])/"jv_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t', names=["path", "client_id", "sentence"]))
dfs[1] = dfs[1].drop(["client_id"], axis=1)
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_javanese()
test_dataset = dataset['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-javanese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”_\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.61 %
## Training
[OpenSLR High quality TTS data for Javanese](https://openslr.org/41/) was used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb),
and the evaluation notebook [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Javanese.ipynb).
|
callmeJ/opus-mt-en-vi-finetuned-eng-to-vie | a41b6473dffe21743fdefb5410cd851a59a52325 | 2021-11-06T06:29:09.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | callmeJ | null | callmeJ/opus-mt-en-vi-finetuned-eng-to-vie | 3 | null | transformers | 21,158 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-vi-finetuned-eng-to-vie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-vi-finetuned-eng-to-vie
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
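In the meantime, a minimal English-to-Vietnamese sketch (the input sentence is arbitrary):
```python
from transformers import pipeline
translator = pipeline("translation", model="callmeJ/opus-mt-en-vi-finetuned-eng-to-vie")
print(translator("How are you today?")[0]["translation_text"])
```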
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 219 | 0.3771 | 73.2405 | 8.274 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
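The sections above are auto-generated stubs; for orientation, a minimal usage sketch with the translation pipeline (the English sentence is an arbitrary example):

```python
from transformers import pipeline

# Load the fine-tuned English-to-Vietnamese checkpoint
translator = pipeline("translation", model="callmeJ/opus-mt-en-vi-finetuned-eng-to-vie")

# Arbitrary example sentence
print(translator("The weather is nice today.")[0]["translation_text"])
```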
|
camille/bert-base-pruned-voc-esw0.5-40000-en-fr-cased | ff96678392d5e4165f733fec6c8e98ba28a42d55 | 2021-05-19T13:53:48.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.5-40000-en-fr-cased | 3 | null | transformers | 21,159 | Entry not found |
camille/bert-base-pruned-voc-esw0.7-40000-en-de-cased | 356c034afae3d126fd6b53c67057ea020208b6c5 | 2021-05-19T13:54:46.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.7-40000-en-de-cased | 3 | null | transformers | 21,160 | Entry not found |
camille/bert-base-pruned-voc-esw0.9-40000-en-de-cased | d1c275522b95cc92007e416e14f0eb65825fe39a | 2021-05-19T13:56:49.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.9-40000-en-de-cased | 3 | null | transformers | 21,161 | Entry not found |
cammy/bart-large-cnn-finetuned-weaksup-1000-pad | 2b0d0d96a2fd3d77ef197915d3a536cde265f30c | 2022-02-22T09:29:33.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-1000-pad | 3 | null | transformers | 21,162 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000-pad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-pad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4168
- Rouge1: 26.2506
- Rouge2: 10.7802
- Rougel: 19.2236
- Rougelsum: 22.6883
- Gen Len: 68.74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1434 | 1.0 | 1000 | 0.4168 | 26.2506 | 10.7802 | 19.2236 | 22.6883 | 68.74 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
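Since the card is auto-generated, a minimal usage sketch with the summarization pipeline may help orient readers (the input article is a hypothetical placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="cammy/bart-large-cnn-finetuned-weaksup-1000-pad")

# Hypothetical input document
article = (
    "The city council met on Tuesday to discuss the new transit plan. "
    "Officials said the proposal would add three bus routes and extend service hours."
)
print(summarizer(article, max_length=130, min_length=10, do_sample=False)[0]["summary_text"])
```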
|
cammy/bart-large-cnn-finetuned-weaksup-10000 | 259fb27b91d082cadb1fcc0c8284af4ee4d49f37 | 2022-02-23T06:35:17.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-10000 | 3 | null | transformers | 21,163 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-10000
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6031
- Rouge1: 28.3912
- Rouge2: 13.655
- Rougel: 22.287
- Rougelsum: 25.4794
- Gen Len: 67.995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 1.2991 | 1.0 | 10000 | 1.6031 | 28.3912 | 13.655 | 22.287 | 25.4794 | 67.995 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/roberta-base-finetuned-weaksup-1000 | fe7861012367d75e8b1fbe6afcdf5a8075da6f36 | 2022-02-24T08:51:48.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/roberta-base-finetuned-weaksup-1000 | 3 | null | transformers | 21,164 | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-weaksup-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-weaksup-1000
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
candra/headline-small-gpt2 | 5ccb6995c38b6f77b9f85ef75209112e6f08d248 | 2021-12-16T05:46:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | candra | null | candra/headline-small-gpt2 | 3 | null | transformers | 21,165 | A small GPT-2 model for headline generation. |
cariai/meds | 1ad1e008e20ea52fcae94322a6ec77e5efb4781e | 2021-05-20T15:14:34.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | cariai | null | cariai/meds | 3 | null | transformers | 21,166 | Entry not found |
carlosejimenez/wiki103_bert_small_visual_context_e27 | 31e58217df69c4eaebaa66d3eaf6a63a5bb85e9d | 2021-12-14T17:06:48.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/wiki103_bert_small_visual_context_e27 | 3 | null | transformers | 21,167 | Entry not found |
carlosserquen/electrafp | 8f1614d1530ebe9547acf054f1057e9fbdbc179d | 2021-12-07T04:06:46.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | carlosserquen | null | carlosserquen/electrafp | 3 | null | transformers | 21,168 | Entry not found |
castorini/azbert-base | f5b3bba39e97ba585843169b0d9e37a7743e7a09 | 2021-11-05T00:11:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"pretraining",
"en",
"transformers",
"azbert",
"fill-mask",
"license:mit"
] | fill-mask | false | castorini | null | castorini/azbert-base | 3 | null | transformers | 21,169 | ---
language: en
tags:
- azbert
- pretraining
- fill-mask
widget:
- text: "$f$ $($ $x$ [MASK] $y$ $)$"
example_title: "mathy"
- text: "$x$ [MASK] $x$ $equal$ $2$ $x$"
example_title: "mathy"
- text: "Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$"
example_title: "mathy"
- text: "Proof by induction that $n$ [MASK] $gt$ $3$ $n$ for $n$ $gt$ $6$"
example_title: "mathy"
- text: "The goal of life is [MASK]."
example_title: "philosophical"
license: mit
---
## About
Here we share a pretrained BERT model that is aware of math tokens. The math tokens are treated specially and tokenized using [pya0](https://github.com/approach0/pya0), which adds only a limited set of new tokens for LaTeX markup (the total vocabulary is just 31,061).
This model was trained on 4 x 2 Tesla V100 GPUs with a total batch size of 64, using Math StackExchange data (2.7 million sentence pairs) for 7 epochs.
### Usage
Download and try it out
```sh
pip install pya0==0.3.2
wget https://vault.cs.uwaterloo.ca/s/gqstFZmWHCLGXe3/download -O ckpt.tar.gz
mkdir -p ckpt
tar xzf ckpt.tar.gz -C ckpt --strip-components=1
python test.py --test_file test.txt
```
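The same checkpoint is mirrored on the Hugging Face Hub (see below), so as a rough sketch it can also be queried with the fill-mask pipeline, reusing one of the widget prompts above; this assumes the hosted tokenizer files reproduce the pya0 math tokenization:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="castorini/azbert-base")

# Prompt taken from the widget examples above
for pred in fill_mask("The goal of life is [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```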
### Test file format
Modify the test examples in `test.txt` to play with it.
The test file is tab-separated; the first column gives additional positions you want to mask in the right-hand sentence (useful for masking tokens in math markups). A zero means no additional mask positions.
### Example output

### Upload to huggingface
This repo is hosted on [Github](https://github.com/approach0/azbert), and only mirrored at [huggingface](https://huggingface.co/castorini/azbert-base).
To upload to huggingface, use the `upload2hgf.sh` script.
Before running this script, be sure to check:
* checkpoints for the model and tokenizer are created under the `./ckpt` folder
* model contains all the files needed: `config.json` and `pytorch_model.bin`
* tokenizer contains all the files needed: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt` and `tokenizer.json`
* no `tokenizer_file` field in `tokenizer_config.json` (sometimes it is located locally at `~/.cache`)
* `git-lfs` is installed
* having git-remote named `hgf` reference to `https://huggingface.co/castorini/azbert-base`
|
castorini/dkrr-dpr-tqa-retriever | 1000ac32dcca2a464232493cded5b54b604951ab | 2022-02-13T17:57:26.000Z | [
"pytorch",
"bert",
"arxiv:2012.04584",
"transformers"
] | null | false | castorini | null | castorini/dkrr-dpr-tqa-retriever | 3 | null | transformers | 21,170 | This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini:
```
@misc{izacard2020distilling,
title={Distilling Knowledge from Reader to Retriever for Question Answering},
author={Gautier Izacard and Edouard Grave},
year={2020},
eprint={2012.04584},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
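The canonical usage is through Pyserini; as a rough sketch, the underlying BERT encoder can also be loaded with transformers to embed a question (the CLS pooling and the sample question are assumptions, not documented behavior):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("castorini/dkrr-dpr-tqa-retriever")
model = AutoModel.from_pretrained("castorini/dkrr-dpr-tqa-retriever")

inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)

# Assumption: use the [CLS] vector as the DPR-style query embedding
query_embedding = output.last_hidden_state[:, 0, :]
print(query_embedding.shape)
```
|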
castorini/unicoil-noexp-msmarco-passage | 5e81be56f0ab665168ca8a5fbbedf27888989028 | 2021-07-13T22:31:12.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/unicoil-noexp-msmarco-passage | 3 | null | transformers | 21,171 | Entry not found |
celtics1863/env-bert-chinese | 657e49b80321a680489b5975f32118f745e54f77 | 2022-01-10T07:16:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"transformers",
"pretrain",
"environment",
"autotrain_compatible"
] | fill-mask | false | celtics1863 | null | celtics1863/env-bert-chinese | 3 | null | transformers | 21,172 | ---
language: zh
widget:
- text: "总[MASK]是水环境中的重要污染物。"
- text: "气[MASK]变化是重要的全球环境问题。"
tags:
- pretrain
- pytorch
- environment
---
A Chinese pretrained BERT model for the environmental domain, trained on top of hfl/chinese-bert-wwm-ext, intended to first learn Chinese language expression and then acquire specialist knowledge of the environmental field.
The 1.5 GB pretraining corpus covers water environment, atmospheric environment, soil environment, climate change, Chinese journals, national policies, and related material.
The project is ongoing, and related content will continue to be updated.
Research group, School of Environment, Tsinghua University.
For related requests or suggestions, contact [email protected]
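A minimal sketch of querying the model with the fill-mask pipeline, reusing one of the widget prompts above:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="celtics1863/env-bert-chinese")

# Widget prompt; roughly: "Climate change is an important global environmental issue," with one character masked
for pred in fill_mask("气[MASK]变化是重要的全球环境问题。"):
    print(pred["token_str"], round(pred["score"], 4))
```
|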
centon21/DialoGPT-small-harrypotter | 2f8f3e554993f3be8a83a85243a7a869e34036f3 | 2021-08-28T17:03:26.000Z | [
"pytorch",
"conversational"
] | conversational | false | centon21 | null | centon21/DialoGPT-small-harrypotter | 3 | null | null | 21,173 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
cestwc/roberta-base-unigram-quaternary | b47fadd806a32d9e21a6dd4d3101f3accf9082ca | 2021-12-06T17:12:50.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | cestwc | null | cestwc/roberta-base-unigram-quaternary | 3 | null | transformers | 21,174 | Entry not found |
chamodkarunasena/DialoGPT-medium-sokka | b0ad5a64bae93e3fc41d37bf521485444c094052 | 2021-09-02T10:56:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | chamodkarunasena | null | chamodkarunasena/DialoGPT-medium-sokka | 3 | null | transformers | 21,175 | ---
tags:
- conversational
---
# Sokka DialoGPT Model
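A minimal chat-loop sketch following the standard DialoGPT usage pattern (the pattern comes from the upstream DialoGPT card, not this repo):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chamodkarunasena/DialoGPT-medium-sokka")
model = AutoModelForCausalLM.from_pretrained("chamodkarunasena/DialoGPT-medium-sokka")

chat_history_ids = None
for step in range(3):
    # Encode user input and append the end-of-sequence token
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = torch.cat([chat_history_ids, new_ids], dim=-1) if chat_history_ids is not None else new_ids
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Sokka:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```
|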
cimm-kzn/rudr-bert | 78a40d4b4fcea00406a72d33ee80cefbeeb4e83f | 2020-12-14T14:50:15.000Z | [
"pytorch",
"arxiv:2004.03659",
"transformers"
] | null | false | cimm-kzn | null | cimm-kzn/rudr-bert | 3 | 2 | transformers | 21,176 | ## RuDR-BERT
RuDR-BERT is a multilingual, cased model pretrained on the raw part of the RuDReC corpus (1.4M reviews). Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: https://yadi.sk/d/-PTn0xhk1PqvgQ
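The checkpoint is also hosted in this repository, so a minimal feature-extraction sketch looks like the following (the Russian review-style sentence is an arbitrary example):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cimm-kzn/rudr-bert")
model = AutoModel.from_pretrained("cimm-kzn/rudr-bert")

# Arbitrary drug-review-style sentence: "The drug helped, but caused a headache."
inputs = tokenizer("Препарат помог, но вызвал головную боль.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```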
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.
preprint: https://arxiv.org/abs/2004.03659
```
@article{10.1093/bioinformatics/btaa675,
author = {Tutubalina, Elena and Alimova, Ilseyar and Miftahutdinov, Zulfat and Sakhovskiy, Andrey and Malykh, Valentin and Nikolenko, Sergey},
title = "{The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews}",
journal = {Bioinformatics},
year = {2020},
month = {07},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btaa675},
url = {https://doi.org/10.1093/bioinformatics/btaa675},
note = {btaa675},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa675/33539752/btaa675.pdf},
}
```
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.
[link to paper](https://www.researchgate.net/profile/Elena_Tutubalina/publication/323751823_Using_semantic_analysis_of_texts_for_the_identification_of_drugs_with_similar_therapeutic_effects/links/5bf7cfc3299bf1a0202cbc1f/Using-semantic-analysis-of-texts-for-the-identification-of-drugs-with-similar-therapeutic-effects.pdf)
```
@article{tutubalina2017using,
title={Using semantic analysis of texts for the identification of drugs with similar therapeutic effects},
author={Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE},
journal={Russian Chemical Bulletin},
volume={66},
number={11},
pages={2180--2189},
year={2017},
publisher={Springer}
}
``` |
ck46/t5-small-hotpot-qa-qg | f426fd5c7e70e90c30d31c903c8ef7f45cb14c86 | 2021-12-24T15:03:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ck46 | null | ck46/t5-small-hotpot-qa-qg | 3 | null | transformers | 21,177 | Entry not found |
ck46/t5-small-squad-qa-qg | 7335da7220825393777ff1b0149deb5476e4d8f3 | 2021-12-24T15:05:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ck46 | null | ck46/t5-small-squad-qa-qg | 3 | null | transformers | 21,178 | Entry not found |
cl-nagoya/defsent-bert-base-uncased-cls | b62d0ec5a3049e9a9cb0131b53514cc29cf05c08 | 2021-08-02T16:48:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-bert-base-uncased-cls | 3 | null | transformers | 21,179 | Entry not found |
cl-nagoya/defsent-roberta-base-cls | 6b1c0d30b8a3cb5aba423ebf6b76912a261a911b | 2021-08-05T05:47:35.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-roberta-base-cls | 3 | null | transformers | 21,180 | Entry not found |
cl-nagoya/defsent-roberta-base-max | a7488ee4e3325d768d100d7af7fb033a76fb1779 | 2021-08-05T05:47:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-roberta-base-max | 3 | null | transformers | 21,181 | Entry not found |
clapika2010/test-model | cf451eeaa6ce3ed106e35dfb66c09b432b38076f | 2022-02-14T08:51:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | clapika2010 | null | clapika2010/test-model | 3 | null | transformers | 21,182 | Entry not found |
clarin-pl/herbert-kgr10 | 14b918d4dcb2c0a5ae1d45b067b58a82c56fb965 | 2021-08-09T22:52:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | clarin-pl | null | clarin-pl/herbert-kgr10 | 3 | null | transformers | 21,183 | Entry not found |
claudelkros/T5_french_wiki_summarizer | f4051cb4ef8970eb43007f589a2b56978fbf80a5 | 2021-06-23T12:00:16.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | claudelkros | null | claudelkros/T5_french_wiki_summarizer | 3 | null | transformers | 21,184 | ��h e e l o
|
codealtgeek/DiabloGPT-medium-rickmorty | 9800188b78cacd6919cc612418fc0c516f52ec21 | 2021-11-16T23:24:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | codealtgeek | null | codealtgeek/DiabloGPT-medium-rickmorty | 3 | null | transformers | 21,185 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
codingJacob/dummy-model | 388dc4fdfeb6c292c16cd0d75d653db0c7e4ea02 | 2021-07-21T05:18:47.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | codingJacob | null | codingJacob/dummy-model | 3 | null | transformers | 21,186 | Entry not found |
coldfir3/bert-base-uncased-issues-128 | 4a8ead57c1c910a294d71428e546a5114479b08f | 2022-01-03T20:38:45.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | coldfir3 | null | coldfir3/bert-base-uncased-issues-128 | 3 | null | transformers | 21,187 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0975 | 1.0 | 291 | 1.7060 |
| 1.648 | 2.0 | 582 | 1.4280 |
| 1.4837 | 3.0 | 873 | 1.3980 |
| 1.3978 | 4.0 | 1164 | 1.4040 |
| 1.3314 | 5.0 | 1455 | 1.2032 |
| 1.2954 | 6.0 | 1746 | 1.2814 |
| 1.2448 | 7.0 | 2037 | 1.2635 |
| 1.1983 | 8.0 | 2328 | 1.2071 |
| 1.1849 | 9.0 | 2619 | 1.1675 |
| 1.1414 | 10.0 | 2910 | 1.2095 |
| 1.1314 | 11.0 | 3201 | 1.1858 |
| 1.0943 | 12.0 | 3492 | 1.1658 |
| 1.0838 | 13.0 | 3783 | 1.2336 |
| 1.0733 | 14.0 | 4074 | 1.1606 |
| 1.0627 | 15.0 | 4365 | 1.1188 |
| 1.055 | 16.0 | 4656 | 1.2500 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
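The card is auto-generated; since the model continues masked-LM pretraining on GitHub-issues text, a minimal fill-mask sketch may help orient readers (the prompt is a hypothetical issue-style sentence):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="coldfir3/bert-base-uncased-issues-128")

# Hypothetical issue-style prompt
for pred in fill_mask("this pull request [MASK] the memory leak."):
    print(pred["token_str"], round(pred["score"], 4))
```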
|
comacrae/roberta-unaugmentedv3 | c381a58522c6d3d5a659b90caccba3bb5c82202a | 2022-02-23T00:03:59.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | comacrae | null | comacrae/roberta-unaugmentedv3 | 3 | null | transformers | 21,188 | Entry not found |
comodoro/wav2vec2-xls-r-300m-cs-cv8 | 000061100df97ec1bb3f46ca7e7fe0782fe7c836 | 2022-03-24T11:52:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"cs",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-cs-cv8 | 3 | null | transformers | 21,189 | ---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- xlsr-fine-tuning-week
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Czech comodoro Wav2Vec2 XLSR 300M CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cs
metrics:
- name: Test WER
type: wer
value: 10.3
- name: Test CER
type: cer
value: 2.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 54.29
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 44.55
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-cs-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset.
It achieves the following results on the evaluation set while training:
- Loss: 0.2327
- Wer: 0.1608
- Cer: 0.0376
The `eval.py` script results using a LM are:
WER: 0.10281503199350225
CER: 0.02622802241689026
## Model description
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Czech using the [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` splits were used for training.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during first stage of training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
The following hyperparameters were used during second stage of training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 7.2926 | 8.06 | 250 | 3.8497 | 1.0 | 1.0 |
| 3.417 | 16.13 | 500 | 3.2852 | 1.0 | 0.9857 |
| 2.0264 | 24.19 | 750 | 0.7099 | 0.7342 | 0.1768 |
| 0.4018 | 32.25 | 1000 | 0.6188 | 0.6415 | 0.1551 |
| 0.2444 | 40.32 | 1250 | 0.6632 | 0.6362 | 0.1600 |
| 0.1882 | 48.38 | 1500 | 0.6070 | 0.5783 | 0.1388 |
| 0.153 | 56.44 | 1750 | 0.6425 | 0.5720 | 0.1377 |
| 0.1214 | 64.51 | 2000 | 0.6363 | 0.5546 | 0.1337 |
| 0.1011 | 72.57 | 2250 | 0.6310 | 0.5222 | 0.1224 |
| 0.0879 | 80.63 | 2500 | 0.6353 | 0.5258 | 0.1253 |
| 0.0782 | 88.7 | 2750 | 0.6078 | 0.4904 | 0.1127 |
| 0.0709 | 96.76 | 3000 | 0.6465 | 0.4960 | 0.1154 |
| 0.0661 | 104.82 | 3250 | 0.6622 | 0.4945 | 0.1166 |
| 0.0616 | 112.89 | 3500 | 0.6440 | 0.4786 | 0.1104 |
| 0.0579 | 120.95 | 3750 | 0.6815 | 0.4887 | 0.1144 |
| 0.0549 | 129.03 | 4000 | 0.6603 | 0.4780 | 0.1105 |
| 0.0527 | 137.09 | 4250 | 0.6652 | 0.4749 | 0.1090 |
| 0.0506 | 145.16 | 4500 | 0.6958 | 0.4846 | 0.1133 |
Further fine-tuning with slightly different architecture and higher learning rate:
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.576 | 8.06 | 250 | 0.2411 | 0.2340 | 0.0502 |
| 0.2564 | 16.13 | 500 | 0.2305 | 0.2097 | 0.0492 |
| 0.2018 | 24.19 | 750 | 0.2371 | 0.2059 | 0.0494 |
| 0.1549 | 32.25 | 1000 | 0.2298 | 0.1844 | 0.0435 |
| 0.1224 | 40.32 | 1250 | 0.2288 | 0.1725 | 0.0407 |
| 0.1004 | 48.38 | 1500 | 0.2327 | 0.1608 | 0.0376 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
cooelf/limitbert | ab9655960a805efe3e2b98363d7e89e9ecde4bd6 | 2020-12-11T21:36:18.000Z | [
"pytorch",
"arxiv:1910.14296",
"transformers"
] | null | false | cooelf | null | cooelf/limitbert | 3 | null | transformers | 21,190 | # LIMIT-BERT
Code and model for the *EMNLP 2020 Findings* paper:
[LIMIT-BERT: Linguistic Informed Multi-task BERT](https://arxiv.org/abs/1910.14296)
## Contents
1. [Requirements](#Requirements)
2. [Training](#Training)
## Requirements
* Python 3.6 or higher.
* Cython 0.25.2 or any compatible version.
* [PyTorch](http://pytorch.org/) 1.0.0+.
* [EVALB](http://nlp.cs.nyu.edu/evalb/). Before starting, run `make` inside the `EVALB/` directory to compile an `evalb` executable. This will be called from Python for evaluation.
* [pytorch-transformers](https://github.com/huggingface/pytorch-transformers) PyTorch 1.0.0+ or any compatible version.
#### Pre-trained Models (PyTorch)
The following pre-trained models are available for download from Google Drive:
* [`LIMIT-BERT`](https://drive.google.com/open?id=1fm0cK2A91iLG3lCpwowCCQSALnWS2X4i):
PyTorch version, with the same settings as BERT-Large-WWM; load the model with [pytorch-transformers](https://github.com/huggingface/pytorch-transformers).
## How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cooelf/limitbert")
model = AutoModel.from_pretrained("cooelf/limitbert")
```
Please see our original repo for the training scripts.
https://github.com/cooelf/LIMIT-BERT
## Training
To train LIMIT-BERT, simply run:
```
sh run_limitbert.sh
```
### Evaluation Instructions
To test after setting model path:
```
sh test_bert.sh
```
## Citation
```
@article{zhou2019limit,
title={{LIMIT-BERT}: Linguistic informed multi-task {BERT}},
author={Zhou, Junru and Zhang, Zhuosheng and Zhao, Hai},
journal={arXiv preprint arXiv:1910.14296},
year={2019}
}
``` |
copq1/roberta_klue_v0.1 | bcfaf0c3a9635361b6cdbe1478cba221c2b13bea | 2021-10-31T08:31:02.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | copq1 | null | copq1/roberta_klue_v0.1 | 3 | null | transformers | 21,191 | Entry not found |
cosmicray001/small-harry | 1228debd698f566d3a5b0acd876e9d89d75c8047 | 2021-08-28T12:18:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cosmicray001 | null | cosmicray001/small-harry | 3 | null | transformers | 21,192 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
cowTodd/adalm-bio-base | 6de6a55a4b5f9004acc6e2d2bf3bf79bfb5e7004 | 2021-09-18T05:47:33.000Z | [
"pytorch",
"transformers"
] | null | false | cowTodd | null | cowTodd/adalm-bio-base | 3 | null | transformers | 21,193 | Entry not found |
crabz/bertoslav-limited-ner | 7e050ecb277fa7e8539b622032353655ba5946ef | 2022-03-06T12:29:42.000Z | [
"pytorch",
"distilbert",
"token-classification",
"sk",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | crabz | null | crabz/bertoslav-limited-ner | 3 | null | transformers | 21,194 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
inference: false
language:
- sk
model-index:
- name: bertoslav-limited-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann sk
type: wikiann
args: sk
metrics:
- name: Precision
type: precision
value: 0.8985571260306242
- name: Recall
type: recall
value: 0.9173994738819993
- name: F1
type: f1
value: 0.9078805459481573
- name: Accuracy
type: accuracy
value: 0.9700235061239639
---
# Named Entity Recognition based on bertoslav-limited
This model is a fine-tuned version of [crabz/bertoslav-limited](https://huggingface.co/crabz/bertoslav-limited) on the Slovak wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2119
- Precision: 0.8986
- Recall: 0.9174
- F1: 0.9079
- Accuracy: 0.9700
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2953 | 1.0 | 834 | 0.1516 | 0.8413 | 0.8647 | 0.8529 | 0.9549 |
| 0.0975 | 2.0 | 1668 | 0.1304 | 0.8787 | 0.9056 | 0.8920 | 0.9658 |
| 0.0487 | 3.0 | 2502 | 0.1405 | 0.8916 | 0.8958 | 0.8937 | 0.9660 |
| 0.025 | 4.0 | 3336 | 0.1658 | 0.8850 | 0.9116 | 0.8981 | 0.9669 |
| 0.0161 | 5.0 | 4170 | 0.1739 | 0.8974 | 0.9127 | 0.9050 | 0.9693 |
| 0.0074 | 6.0 | 5004 | 0.1888 | 0.8900 | 0.9144 | 0.9020 | 0.9687 |
| 0.0051 | 7.0 | 5838 | 0.1996 | 0.8946 | 0.9145 | 0.9044 | 0.9693 |
| 0.0039 | 8.0 | 6672 | 0.2052 | 0.8993 | 0.9158 | 0.9075 | 0.9697 |
| 0.0024 | 9.0 | 7506 | 0.2112 | 0.8946 | 0.9171 | 0.9057 | 0.9696 |
| 0.0018 | 10.0 | 8340 | 0.2119 | 0.8986 | 0.9174 | 0.9079 | 0.9700 |
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
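`inference: false` disables the hosted widget, but the model runs locally; a minimal sketch with the token-classification pipeline (the Slovak sentence is an arbitrary example):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="crabz/bertoslav-limited-ner", aggregation_strategy="simple")

# Arbitrary Slovak sentence: "Barack Obama visited Bratislava on Monday."
for entity in ner("Barack Obama navštívil v pondelok Bratislavu."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```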
|
cristinakuo/wav2vec2-latino40 | adc48cae0aa637a8518cb6ed933334e42d1152fd | 2022-06-01T15:53:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cristinakuo | null | cristinakuo/wav2vec2-latino40 | 3 | null | transformers | 21,195 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-latino40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-latino40
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8795
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.6846 | 0.83 | 100 | 2.9086 | 1.0 |
| 2.8686 | 1.67 | 200 | 2.8922 | 1.0 |
| 2.8805 | 2.5 | 300 | 2.9326 | 1.0 |
| 2.8613 | 3.33 | 400 | 2.8698 | 1.0 |
| 2.8643 | 4.17 | 500 | 2.9027 | 1.0 |
| 2.8688 | 5.0 | 600 | 2.9544 | 1.0 |
| 2.8689 | 5.83 | 700 | 2.8914 | 1.0 |
| 2.8558 | 6.67 | 800 | 2.8762 | 1.0 |
| 2.8537 | 7.5 | 900 | 2.8982 | 1.0 |
| 2.8522 | 8.33 | 1000 | 2.8820 | 1.0 |
| 2.8468 | 9.17 | 1100 | 2.8760 | 1.0 |
| 2.8454 | 10.0 | 1200 | 2.8795 | 1.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa | 5e68aab09218d6c9b066d889a0d67c0862c23a61 | 2021-07-14T07:45:06.000Z | [
"pytorch",
"deberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | cstorm125 | null | cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa | 3 | null | transformers | 21,196 | ---
widget:
- text: "สวนกุหลาบเป็นโรงเรียนอะไร"
context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
---
# wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa
Finetuning `airesearch/wangchan-deberta_v1-base-wiki-20210520-news-spm` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
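A minimal sketch of running the model with the question-answering pipeline, reusing the widget example from the metadata above (assuming the checkpoint loads with the stock pipeline):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa",
)

question = "สวนกุหลาบเป็นโรงเรียนอะไร"
context = (
    "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) "
    "เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ"
)
print(qa(question=question, context=context))
```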
Run with:
```
export MODEL_NAME=wangchan-deberta_v1-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--revision mlm@ckp-41100 \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--model_max_length 400 \
--pad_on_right \
--fp16 \
--use_auth_token
``` |
cvcio/roberta-el-uncased-twitter-v1 | 060762ce901ff4ea596179e0f29e9e28668588d1 | 2021-06-09T17:16:27.000Z | [
"pytorch",
"roberta",
"fill-mask",
"el",
"transformers",
"twitter",
"Greek",
"autotrain_compatible"
] | fill-mask | false | cvcio | null | cvcio/roberta-el-uncased-twitter-v1 | 3 | null | transformers | 21,197 | ---
language: el
tags:
- roberta
- twitter
- Greek
widget:
- text: "<mask>: μεγαλη υποχωρηση του ιικου φορτιου σε αττικη και θεσσαλονικη"
---
# Greek RoBERTa Uncased (v1)
Model pretrained on Greek with a masked language modeling (MLM) objective, using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model is case-insensitive and strips Greek diacritics (uncased, no accents).
### Training data
This model was pretrained on almost 18M unique tweets, all Greek, collected between 2008-2021, from almost 450K distinct users.
### Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50256. For the tokenizer, we split strings containing any numbers (e.g. EU2019 ==> EU 2019). The tweet normalization logic is described in the example below.
```python
import unicodedata
from transformers import pipeline

def normalize_tweet(tweet, do_lower = True, do_strip_accents = True, do_split_word_numbers = False, user_fill = '', url_fill = ''):
    # your tweet pre-processing logic goes here
    # example...
    # remove extra spaces, escape HTML, replace non-standard punctuation
    # replace any @user with blank
    # replace any link with blank
    # explode hashtags to strings (ex. #EU2019 ==> EU 2019)
    # remove all emojis
    # if do_split_word_numbers:
    #     split strings containing any numbers
    # standardize punctuation
    # remove unicode symbols
    if do_lower:
        tweet = tweet.lower()
    if do_strip_accents:
        tweet = strip_accents(tweet)
    return tweet.strip()

def strip_accents(s):
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn')

nlp = pipeline('fill-mask', model = 'cvcio/roberta-el-uncased-twitter-v1')

print(
    nlp(
        normalize_tweet(
            '<mask>: Μεγάλη υποχώρηση του ιικού φορτίου σε Αττική και Θεσσαλονίκη'
        )
    )
)
```
### Pretraining
The model was pretrained on a T4 GPU for 1.2M steps with a batch size of 96 and a sequence length of 96. The optimizer used is Adam with a learning rate of 1e-5, gradient accumulation steps of 8, learning rate warmup for 50000 steps and linear decay of the learning rate after.
### Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
### About Us
[Civic Information Office](https://cvcio.org/) is a non-profit organization based in Athens, Greece, focused on creating technology and research products for the public interest.
|
cwh/gpt2-medium-finetuned-wikitext2 | 0001362312a61408437ef574afe2ea082f56e298 | 2021-10-28T11:39:40.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cwh | null | cwh/gpt2-medium-finetuned-wikitext2 | 3 | null | transformers | 21,198 | Entry not found |
cyl/adapter_t5-3b_sst2 | 45f77ce72a2e047679ee65c68094f0a8e1d721b0 | 2022-02-23T05:55:39.000Z | [
"pytorch",
"transformers"
] | null | false | cyl | null | cyl/adapter_t5-3b_sst2 | 3 | null | transformers | 21,199 | Entry not found |