modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
adam-chell/tweet-sentiment-analyzer | 38f8c456eca52b55ab5a96de8c5294477eacab25 | 2021-12-20T21:30:06.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | adam-chell | null | adam-chell/tweet-sentiment-analyzer | 4 | 1 | transformers | 18,300 | This model was trained by fine-tuning the BERTweet sentiment classification model "finiteautomata/bertweet-base-sentiment-analysis" on a labeled positive/negative dataset of tweets.
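A minimal way to try it (a sketch assuming the standard `transformers` pipeline API; the example tweet is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub
classifier = pipeline("text-classification", model="adam-chell/tweet-sentiment-analyzer")
print(classifier("I love this new phone!"))
```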
email : [email protected] |
adamlin/flowscore-speak-model | 0176c03597fd42ae95320602adde6f67504fb08a | 2021-06-29T10:29:34.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | false | adamlin | null | adamlin/flowscore-speak-model | 4 | null | transformers | 18,301 | Entry not found |
adamlin/text-cls | 3ec72b0342cb47c51fc36cf54d696580b9b3654d | 2021-07-24T06:53:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | adamlin | null | adamlin/text-cls | 4 | null | transformers | 18,302 | Entry not found |
adamlin/usr-topicalchat-roberta_ft | c9f6e528af2dd0c0ee91e774cedfd94a1f4aa6d4 | 2021-06-28T12:58:44.000Z | [
"pytorch",
"transformers"
] | null | false | adamlin | null | adamlin/usr-topicalchat-roberta_ft | 4 | null | transformers | 18,303 | Entry not found |
addy88/wav2vec2-base-finetuned-ks | 1b5be4d2c76953a96fa189d224a89c5d153faad6 | 2021-12-12T03:41:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | addy88 | null | addy88/wav2vec2-base-finetuned-ks | 4 | null | transformers | 18,304 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- Accuracy: 0.9768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
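As a rough sketch (assuming the standard `transformers.TrainingArguments` API; the `output_dir` value and the Trainer wiring are assumptions, not taken from this card), these settings correspond to:
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above; not the original script.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned-ks",  # assumed
    learning_rate=3e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 total train batch size
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    # The Adam betas/epsilon listed above are the library defaults.
)
```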
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.102 | 1.0 | 399 | 1.0087 | 0.6574 |
| 0.5228 | 2.0 | 798 | 0.4266 | 0.9247 |
| 0.3222 | 3.0 | 1197 | 0.2037 | 0.9744 |
| 0.2096 | 4.0 | 1596 | 0.1444 | 0.9766 |
| 0.2003 | 5.0 | 1995 | 0.1339 | 0.9768 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
addy88/wav2vec2-bhojpuri-stt | d20a1338941d83dae02822ac66c5c5daa6529325 | 2021-12-19T16:48:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-bhojpuri-stt | 4 | null | transformers | 18,305 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-bhojpuri-stt")
model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-bhojpuri-stt")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
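# Example invocation (hypothetical path; wav2vec2 models expect 16 kHz mono audio):
# parse_transcription("sample.wav")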
``` |
addy88/wav2vec2-marathi-stt | c8995d4d2378a556b2766011e899df7b18fcd6e4 | 2021-12-19T16:31:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-marathi-stt | 4 | null | transformers | 18,306 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-marathi-stt")
model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-marathi-stt")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
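# Example invocation (hypothetical path; wav2vec2 models expect 16 kHz mono audio):
# parse_transcription("sample.wav")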
``` |
addy88/wav2vec2-nepali-stt | ab8793fa3cef0534e86fc2d7aa8114074a178075 | 2021-12-19T15:36:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-nepali-stt | 4 | 1 | transformers | 18,307 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-nepali-stt")
model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-nepali-stt")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
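# Example invocation (hypothetical path; wav2vec2 models expect 16 kHz mono audio):
# parse_transcription("sample.wav")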
``` |
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-distilbert-base-cased | 8fe0decf5e90a4c112b7bbe8f87dad2bea83c718 | 2021-11-22T15:50:57.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-distilbert-base-cased | 4 | null | transformers | 18,308 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-indic-bert | ca58d9401a4a5248493197f95cc4de7411f33d4a | 2021-11-22T16:34:37.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-indic-bert | 4 | null | transformers | 18,309 | Entry not found |
aditeyabaral/sentencetransformer-bert-hinglish-big | 4e8f397faaa43c4e25867c823c4ccc11805a8598 | 2021-10-19T19:38:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-bert-hinglish-big | 4 | null | sentence-transformers | 18,310 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
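Since the card mentions clustering and semantic search, here is a minimal similarity sketch (assuming a recent `sentence-transformers` release that provides `util.cos_sim`; the sentences are placeholders):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big')
emb1 = model.encode("This is an example sentence", convert_to_tensor=True)
emb2 = model.encode("Each sentence is converted", convert_to_tensor=True)
# Cosine similarity between the two sentence embeddings
print(util.cos_sim(emb1, emb2))
```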
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
aditeyabaral/sentencetransformer-xlm-roberta-base | 53ae3ffe3c7115a3dde45640b284f84f1417c83d | 2021-10-24T04:56:00.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-xlm-roberta-base | 4 | null | sentence-transformers | 18,311 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-xlm-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-xlm-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-xlm-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-xlm-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-xlm-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
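These parameters correspond roughly to the following `fit()` call — a sketch only, assuming `xlm-roberta-base` as the starting checkpoint and placeholder training pairs (the real training data is not part of this card):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('xlm-roberta-base')  # assumed starting checkpoint
train_examples = [InputExample(texts=['sentence one', 'sentence two'], label=0.9)]  # placeholder pair
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler='WarmupLinear',
    warmup_steps=100,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```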
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
adresgezgini/Wav2Vec2-tr-AG-v1 | eb886f55cf5d1c070cc74779955aef6c904f712a | 2022-02-25T08:02:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | adresgezgini | null | adresgezgini/Wav2Vec2-tr-AG-v1 | 4 | null | transformers | 18,312 | ```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("adresgezgini/Wav2Vec2-tr-AG-v1")
model = Wav2Vec2ForCTC.from_pretrained("adresgezgini/Wav2Vec2-tr-AG-v1")
```
The audio files ses1.mp3 [1], ses2.mp3 [2], and ses3.mp3 [3] shared in the Files section were created by taking a 1–1.5 minute excerpt from open-source audiobook recordings. The model was tested with these clips and the WER values were recorded.
<div align="center">
|Audio file|WER|
| :---: | :---: |
|SES1.mp3|0.17|
|SES2.mp3|0.31|
|SES3.mp3|0.20|
</div>
[1][Sabahattin Ali - Çaydanlık | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=IHUfOpqw-8s)\
[2][Sabahattin Ali - Ses | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=XzX2wBjncOg)\
[3][Sabahattin Ali - Sıçra Köşk | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=SJwUaq0Nu9c)\ |
adriansyahdr/adrBert-base-p1 | 91d2018e0b56eebcb64dc8113dfa52edf4b3c8b9 | 2021-05-18T23:10:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adriansyahdr | null | adriansyahdr/adrBert-base-p1 | 4 | null | transformers | 18,313 | Entry not found |
afreireosorio/opus-mt-en-de-finetuned-en-to-de | 161ad0fe3fd95712f4df23bd4716cfcfa5830afe | 2021-12-04T17:43:39.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | afreireosorio | null | afreireosorio/opus-mt-en-de-finetuned-en-to-de | 4 | null | transformers | 18,314 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-de-finetuned-en-to-de
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 26.4396
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6798
- Bleu: 26.4396
- Gen Len: 24.8156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
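As a rough sketch (assuming the standard `transformers.Seq2SeqTrainingArguments` API; the `output_dir` value is an assumption, not taken from this card), these settings correspond to:
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the hyperparameters listed above; not the original script.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-de-finetuned-en-to-de",  # assumed
    learning_rate=2e-04,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```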
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 2.0864 | 1.0 | 568611 | 1.6798 | 26.4396 | 24.8156 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aheba31/test-predictor | 413fe61ca0b77a0689aefe5834aa851b508e1977 | 2021-11-04T13:44:28.000Z | [
"en",
"dataset:voxceleb",
"arxiv:2106.04624",
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"license:apache-2.0"
] | null | false | aheba31 | null | aheba31/test-predictor | 4 | null | speechbrain | 18,315 | ---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA
- TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN embeddings on Voxceleb
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on VoxCeleb1 + VoxCeleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the VoxCeleb1 test set (cleaned) is:
| Release | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|
| 05-03-21 | 0.69 | 0.08258 |
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
gh repo clone aheba/speechbrain-aheba-contribs
cd speechbrain-aheba-contribs
git checkout pretrain_new
pip install -r requirements.txt
pip install --editable .
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import Predictor
classifier = Predictor.import_model(source="aheba31/test-predictor")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="aheba31/test-predictor", savedir="aheba31/test-predictor")
score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-voxceleb/example1.wav", "speechbrain/spkrec-ecapa-voxceleb/example2.flac")
```
The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
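For example, a sketch mirroring the verification call above with the device option added:
```python
from speechbrain.pretrained import SpeakerRecognition

# Same pretrained model as above, but loaded onto the GPU
verification = SpeakerRecognition.from_hparams(
    source="aheba31/test-predictor",
    savedir="aheba31/test-predictor",
    run_opts={"device": "cuda"},
)
```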
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/VoxCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
ainize/gpt2-rnm-with-season-1 | a35bf4b3786b5817043c6e5fbeabfc0fbd3f8cfa | 2021-05-21T12:08:00.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ainize | null | ainize/gpt2-rnm-with-season-1 | 4 | null | transformers | 18,316 | ### Model information
Fine tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Base model: e-tony/gpt2-rnm
Epoch: 3
Train runtime: 7.1779 secs
Loss: 2.5694
Training notebook: [Colab](https://colab.research.google.com/drive/12NvO1SIZevF8ybJqfN9O21I3i9bU1dOO#scrollTo=KUsyn02WWmf5)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
airKlizz/bart-large-multi-de-wiki-news | 97ffc6b8a8d99f679ec684a5fb0806d91b9f9225 | 2020-06-10T11:38:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/bart-large-multi-de-wiki-news | 4 | null | transformers | 18,317 | Entry not found |
airKlizz/mt5-base-germeval21-toxic-with-task-specific-pretraining | 432b2af4a5c5d44a3349c17bb44d76cab896a506 | 2021-07-12T15:56:07.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/mt5-base-germeval21-toxic-with-task-specific-pretraining | 4 | null | transformers | 18,318 | Entry not found |
airKlizz/t5-base-multi-de-wiki-news | 22357ddc3c2efac0eeea905fff2c26a4dce1084a | 2021-06-23T10:52:40.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/t5-base-multi-de-wiki-news | 4 | null | transformers | 18,319 | Entry not found |
airKlizz/t5-small-multi-combine-wiki-news | c98ad22a5970065eb663e9095098066efc9faa3e | 2021-06-23T11:05:12.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/t5-small-multi-combine-wiki-news | 4 | null | transformers | 18,320 | Entry not found |
akadriu/wav2vec2-large-xlsr-53-Total | 0f5d32b135a1d0118cb4b3d9216ad08f3e310fbc | 2022-02-20T19:03:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akadriu | null | akadriu/wav2vec2-large-xlsr-53-Total | 4 | null | transformers | 18,321 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-Total
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2814
- Wer: 0.2260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9157 | 0.2 | 400 | 2.8204 | 0.9707 |
| 0.9554 | 0.4 | 800 | 0.5295 | 0.5046 |
| 0.7585 | 0.6 | 1200 | 0.4007 | 0.3850 |
| 0.7288 | 0.8 | 1600 | 0.3632 | 0.3447 |
| 0.6792 | 1.0 | 2000 | 0.3433 | 0.3216 |
| 0.6085 | 1.2 | 2400 | 0.3254 | 0.2928 |
| 0.6225 | 1.4 | 2800 | 0.3161 | 0.2832 |
| 0.6183 | 1.6 | 3200 | 0.3111 | 0.2721 |
| 0.5947 | 1.8 | 3600 | 0.2969 | 0.2615 |
| 0.5953 | 2.0 | 4000 | 0.2912 | 0.2515 |
| 0.5358 | 2.2 | 4400 | 0.2920 | 0.2501 |
| 0.5535 | 2.4 | 4800 | 0.2939 | 0.2538 |
| 0.5408 | 2.6 | 5200 | 0.2854 | 0.2452 |
| 0.5272 | 2.8 | 5600 | 0.2816 | 0.2434 |
| 0.5248 | 3.0 | 6000 | 0.2755 | 0.2354 |
| 0.4923 | 3.2 | 6400 | 0.2795 | 0.2353 |
| 0.489 | 3.4 | 6800 | 0.2767 | 0.2330 |
| 0.4932 | 3.6 | 7200 | 0.2821 | 0.2335 |
| 0.4841 | 3.8 | 7600 | 0.2756 | 0.2349 |
| 0.4794 | 4.0 | 8000 | 0.2751 | 0.2265 |
| 0.444 | 4.2 | 8400 | 0.2809 | 0.2283 |
| 0.4533 | 4.4 | 8800 | 0.2804 | 0.2312 |
| 0.4563 | 4.6 | 9200 | 0.2830 | 0.2256 |
| 0.4498 | 4.8 | 9600 | 0.2819 | 0.2251 |
| 0.4532 | 5.0 | 10000 | 0.2814 | 0.2260 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
akahana/indonesia-emotion-distilbert | 49fe98a8c805f66dd7b6638cba6c9a257ae63b3a | 2021-12-08T09:54:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | akahana | null | akahana/indonesia-emotion-distilbert | 4 | null | transformers | 18,322 | Entry not found |
akahana/indonesia-emotion-roberta-small | 3a808d7101bab0fad927c71f8f721cc4418da255 | 2021-12-08T07:44:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | akahana | null | akahana/indonesia-emotion-roberta-small | 4 | null | transformers | 18,323 | Entry not found |
akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final | de36f49142eab0cd8bace4230c1443d739053400 | 2021-12-22T01:26:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akashsivanandan | null | akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final | 4 | null | transformers | 18,324 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tamil-colab-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab-final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7539
- Wer: 0.6135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.1466 | 1.0 | 118 | 4.3444 | 1.0 |
| 3.4188 | 2.0 | 236 | 3.2496 | 1.0 |
| 2.8617 | 3.0 | 354 | 1.6165 | 1.0003 |
| 0.958 | 4.0 | 472 | 0.7984 | 0.8720 |
| 0.5929 | 5.0 | 590 | 0.6733 | 0.7831 |
| 0.4628 | 6.0 | 708 | 0.6536 | 0.7621 |
| 0.3834 | 7.0 | 826 | 0.6037 | 0.7155 |
| 0.3242 | 8.0 | 944 | 0.6376 | 0.7184 |
| 0.2736 | 9.0 | 1062 | 0.6214 | 0.7070 |
| 0.2433 | 10.0 | 1180 | 0.6158 | 0.6944 |
| 0.2217 | 11.0 | 1298 | 0.6548 | 0.6830 |
| 0.1992 | 12.0 | 1416 | 0.6331 | 0.6775 |
| 0.1804 | 13.0 | 1534 | 0.6644 | 0.6874 |
| 0.1639 | 14.0 | 1652 | 0.6629 | 0.6649 |
| 0.143 | 15.0 | 1770 | 0.6927 | 0.6836 |
| 0.1394 | 16.0 | 1888 | 0.6933 | 0.6888 |
| 0.1296 | 17.0 | 2006 | 0.7039 | 0.6860 |
| 0.1212 | 18.0 | 2124 | 0.7042 | 0.6628 |
| 0.1121 | 19.0 | 2242 | 0.7132 | 0.6475 |
| 0.1069 | 20.0 | 2360 | 0.7423 | 0.6438 |
| 0.1063 | 21.0 | 2478 | 0.7171 | 0.6484 |
| 0.1025 | 22.0 | 2596 | 0.7396 | 0.6451 |
| 0.0946 | 23.0 | 2714 | 0.7400 | 0.6432 |
| 0.0902 | 24.0 | 2832 | 0.7385 | 0.6286 |
| 0.0828 | 25.0 | 2950 | 0.7368 | 0.6286 |
| 0.079 | 26.0 | 3068 | 0.7471 | 0.6306 |
| 0.0747 | 27.0 | 3186 | 0.7524 | 0.6201 |
| 0.0661 | 28.0 | 3304 | 0.7576 | 0.6201 |
| 0.0659 | 29.0 | 3422 | 0.7579 | 0.6130 |
| 0.0661 | 30.0 | 3540 | 0.7539 | 0.6135 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
akaushik1/DialoGPT-small-kaiser | 3f84db44e84ae9f50ed8d51c4e96d8d593239ce8 | 2021-09-28T02:05:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | akaushik1 | null | akaushik1/DialoGPT-small-kaiser | 4 | null | transformers | 18,325 | ---
tags:
- conversational
---
# Kaiser DialoGPT Model |
akhooli/gpt2-ar-poetry-aub | c875fcd34207dcf5863858f264d154c971537750 | 2021-05-21T12:25:35.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | akhooli | null | akhooli/gpt2-ar-poetry-aub | 4 | null | transformers | 18,326 | Entry not found |
akoksal/MTMB | 849a48ca9fe2a251ccbf5beaf08ce5de222bab54 | 2021-05-18T23:19:20.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | akoksal | null | akoksal/MTMB | 4 | 1 | transformers | 18,327 | Entry not found |
akoshel/made-ai-dungeon-rugpt3-small | 746819dce03622dae2c2b12a45b2731298032a32 | 2021-12-11T11:21:12.000Z | [
"pytorch"
] | null | false | akoshel | null | akoshel/made-ai-dungeon-rugpt3-small | 4 | null | null | 18,328 | Entry not found |
akshara23/Terra-Classification | 692eb114a5526f2a91be7b8688875d2c538096fd | 2021-08-27T15:21:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | akshara23 | null | akshara23/Terra-Classification | 4 | null | transformers | 18,329 | Entry not found |
akshara23/distilbert-base-uncased-finetuned-cola | b077e9c050715f9a3427a86d4889f5845e8271dd | 2021-08-27T16:29:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | akshara23 | null | akshara23/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 18,330 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.6290322580645161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0475
- Matthews Correlation: 0.6290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 16 | 1.3863 | 0.0 |
| No log | 2.0 | 32 | 1.2695 | 0.4503 |
| No log | 3.0 | 48 | 1.1563 | 0.6110 |
| No log | 4.0 | 64 | 1.0757 | 0.6290 |
| No log | 5.0 | 80 | 1.0475 | 0.6290 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner | 354f11aba989e4217d4bce122139e047f150a954 | 2022-02-10T07:47:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | akshaychaudhary | null | akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner | 4 | null | transformers | 18,331 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-hypertuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-hypertuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5683
- Precision: 0.3398
- Recall: 0.6481
- F1: 0.4459
- Accuracy: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 84 | 0.3566 | 0.2913 | 0.5556 | 0.3822 | 0.8585 |
| No log | 2.0 | 168 | 0.4698 | 0.3366 | 0.6296 | 0.4387 | 0.8730 |
| No log | 3.0 | 252 | 0.5683 | 0.3398 | 0.6481 | 0.4459 | 0.8762 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
alexaapo/greek_legal_bert_v2 | 3d9b92e5eadd0eb757a9b3afb8391f7b2ed50109 | 2021-12-01T10:52:28.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
] | null | false | alexaapo | null | alexaapo/greek_legal_bert_v2 | 4 | null | transformers | 18,332 | |
alexcg1/models | 8576c0c1c70a2207a2156127ad7efcce5112f2eb | 2021-05-21T12:59:45.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | alexcg1 | null | alexcg1/models | 4 | null | transformers | 18,333 | Entry not found |
ali2066/finetuned-token-argumentative | be5af01c52a87612fa2e8ab9b9483890490bde3c | 2022-02-15T13:46:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned-token-argumentative | 4 | null | transformers | 18,334 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned-token-argumentative
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-token-argumentative
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1573
- Precision: 0.3777
- Recall: 0.3919
- F1: 0.3847
- Accuracy: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.3241 | 0.1109 | 0.2178 | 0.1470 | 0.8488 |
| No log | 2.0 | 150 | 0.3145 | 0.1615 | 0.2462 | 0.1950 | 0.8606 |
| No log | 3.0 | 225 | 0.3035 | 0.1913 | 0.3258 | 0.2411 | 0.8590 |
| No log | 4.0 | 300 | 0.3080 | 0.2199 | 0.3220 | 0.2613 | 0.8612 |
| No log | 5.0 | 375 | 0.3038 | 0.2209 | 0.3277 | 0.2639 | 0.8630 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
alireza7/ARMAN-MSR-persian-base-parsinlu-sentiment-food | 76d86cfa446b7c172eef6fe1827e123dce23dd0e | 2021-09-29T19:15:33.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-parsinlu-sentiment-food | 4 | null | transformers | 18,335 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-parsinlu-sentiment-food | 1fb83afc77548134f32ea8051b57e67426a70e56 | 2021-09-29T19:18:19.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-parsinlu-sentiment-food | 4 | null | transformers | 18,336 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-perkey-title | c092a015ca9189d89b85e3a1156f4063ac35b291 | 2021-09-29T19:19:17.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-perkey-title | 4 | null | transformers | 18,337 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-parsinlu-multiple-choice | 2d512c55fc4e7b4ac26a0dc0bd56a176202a07b0 | 2021-09-29T19:20:37.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-parsinlu-multiple-choice | 4 | null | transformers | 18,338 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-perkey-title | aa66d03a1d688130997b804d0f34827589d7afa0 | 2021-09-29T19:23:33.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-perkey-title | 4 | null | transformers | 18,339 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-tebyan | b617b4e551c7d32e990c3730321532ba5df4cb5f | 2021-09-29T19:23:40.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-tebyan | 4 | null | transformers | 18,340 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-wiki-summary | 3839b9b7333d1cafaed9675fe061e40b026f73c3 | 2021-09-29T19:23:55.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-wiki-summary | 4 | null | transformers | 18,341 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base | 460d07ae4003af193c5b5af4cf098f018f7f78e5 | 2021-09-29T19:24:03.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base | 4 | null | transformers | 18,342 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/TRANSFORMER-persian-base-perkey-title | bd17194654b80cfa06746d2541c2ad47b6d3bbcb | 2021-09-29T19:26:44.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/TRANSFORMER-persian-base-perkey-title | 4 | null | transformers | 18,343 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
allenai/dsp_roberta_base_dapt_news_tapt_hyperpartisan_news_515 | de955c2950b5fc85f16b1538395c8503dc23d146 | 2021-05-20T13:13:11.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/dsp_roberta_base_dapt_news_tapt_hyperpartisan_news_515 | 4 | null | transformers | 18,344 | Entry not found |
allenai/dsp_roberta_base_dapt_reviews_tapt_amazon_helpfulness_115K | eea946ca98df3e19b8422648081b2ee013b346d9 | 2021-05-20T13:14:40.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/dsp_roberta_base_dapt_reviews_tapt_amazon_helpfulness_115K | 4 | null | transformers | 18,345 | Entry not found |
allenai/dsp_roberta_base_tapt_ag_115K | 4d1bed63e397b45af824bbd5c78e0eb1e440b15b | 2021-05-20T13:20:47.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/dsp_roberta_base_tapt_ag_115K | 4 | null | transformers | 18,346 | Entry not found |
allenyummy/chinese-bert-wwm-ehr-ner-qasl | e69a41088e9f0d4e69eeda9bddbdcd483854965b | 2021-05-19T11:42:17.000Z | [
"pytorch",
"bert",
"zh-tw",
"transformers"
] | null | false | allenyummy | null | allenyummy/chinese-bert-wwm-ehr-ner-qasl | 4 | null | transformers | 18,347 | ---
language: zh-tw
---
# Model name
Chinese-bert-wwm-electrical-health-records-ner-question-answering-sequence-labeling
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-qasl")
model = AutoModelForTokenClassification.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-qasl")
``` |
alvinkobe/DialoGPT-small-KST | 5d937431dfa8f42908a3b21129901b986703f898 | 2021-10-12T01:21:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | alvinkobe | null | alvinkobe/DialoGPT-small-KST | 4 | null | transformers | 18,348 | ---
tags:
- conversational
---
# PANAFRICAN DialoGPT |
ami-wav2vec2/wav2vec2-base-ami_multi-nithin1 | 060f31b75e29c203ead4de1cf4957d435aaeb3f8 | 2021-10-16T05:05:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin1 | 4 | null | transformers | 18,349 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_multi-nithin1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_multi-nithin1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4710
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 3.1594 | 1.07 | 5000 | 8.2680 | 1.0 |
| 3.1647 | 2.13 | 10000 | 7.7283 | 1.0 |
| 3.152 | 3.2 | 15000 | 8.5267 | 1.0 |
| 3.1738 | 4.27 | 20000 | 7.8057 | 1.0 |
| 3.1628 | 5.33 | 25000 | 8.4358 | 1.0 |
| 3.1314 | 6.4 | 30000 | 8.2546 | 1.0 |
| 3.1772 | 7.46 | 35000 | 8.0952 | 1.0 |
| 3.1504 | 8.53 | 40000 | 8.4454 | 1.0 |
| 3.1598 | 9.6 | 45000 | 8.0497 | 1.0 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.0001_16 | b803009b7440b86977e3a309b675e2b2e4a5aa44 | 2021-11-21T16:12:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.0001_16 | 4 | null | transformers | 18,350 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_0.0001_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_0.0001_16
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4974
- Wer: 0.4508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4959 | 1.72 | 1000 | 1.4430 | 0.4864 |
| 1.2059 | 3.45 | 2000 | 1.2716 | 0.4219 |
| 1.0863 | 5.17 | 3000 | 1.2448 | 0.4069 |
| 1.0271 | 6.9 | 4000 | 1.2464 | 0.3996 |
| 0.9656 | 8.62 | 5000 | 1.2551 | 0.4048 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.0005_16 | ac10a1b2adddd2be2e878fd88ecd0848628bafd6 | 2021-11-20T06:04:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.0005_16 | 4 | null | transformers | 18,351 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_0.0005_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_0.0005_16
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4670
- Wer: 0.4379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2775 | 1.72 | 1000 | 1.2890 | 0.4219 |
| 1.0728 | 3.45 | 2000 | 1.2016 | 0.4005 |
| 0.9245 | 5.17 | 3000 | 1.1885 | 0.3961 |
| 0.8506 | 6.9 | 4000 | 1.2045 | 0.3909 |
| 0.7202 | 8.62 | 5000 | 1.2507 | 0.3944 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_16 | f539916d869d053281a04657f4f86d809037a6f8 | 2021-11-23T19:58:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_16 | 4 | null | transformers | 18,352 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_16
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4765
- Wer: 0.4223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5073 | 1.72 | 1000 | 1.3621 | 0.4369 |
| 1.3054 | 3.45 | 2000 | 1.2400 | 0.4000 |
| 1.2056 | 5.17 | 3000 | 1.2068 | 0.3876 |
| 1.1534 | 6.9 | 4000 | 1.1915 | 0.3816 |
| 1.1094 | 8.62 | 5000 | 1.1872 | 0.3770 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_8 | 79e13fe8925283a275614f75c5583224601e46eb | 2021-11-23T04:04:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_8 | 4 | null | transformers | 18,353 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_dropout_0.0003_8
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4292
- Wer: 0.4203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.6596 | 0.86 | 1000 | 1.4908 | 0.5105 |
| 1.4211 | 1.72 | 2000 | 1.3459 | 0.4167 |
| 1.3246 | 2.59 | 3000 | 1.2844 | 0.3992 |
| 1.2588 | 3.45 | 4000 | 1.2392 | 0.3995 |
| 1.2045 | 4.31 | 5000 | 1.2349 | 0.3928 |
| 1.1543 | 5.17 | 6000 | 1.2056 | 0.3886 |
| 1.119 | 6.03 | 7000 | 1.2005 | 0.3793 |
| 1.0984 | 6.9 | 8000 | 1.2024 | 0.3808 |
| 1.0726 | 7.76 | 9000 | 1.1921 | 0.3791 |
| 1.054 | 8.62 | 10000 | 1.1835 | 0.3793 |
| 1.0498 | 9.48 | 11000 | 1.1854 | 0.3743 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
anas-awadalla/bert-small-pretrained-on-squad | 2a31567d8be4c3294c6515f0943e1b73eab3d3ed | 2022-01-27T03:57:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | anas-awadalla | null | anas-awadalla/bert-small-pretrained-on-squad | 4 | null | transformers | 18,354 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert_small_pretrain_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_small_pretrain_squad
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1410
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
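## Usage
The card leaves usage undocumented; the snippet below is a minimal, illustrative masked-language-model query (the example sentence is invented, not taken from SQuAD):
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "anas-awadalla/bert-small-pretrained-on-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = f"The Normans were originally from {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Rank the vocabulary at the [MASK] position and show the top 5 candidates
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = torch.topk(logits[0, mask_positions[0]], k=5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```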
|
anelnurkayeva/autonlp-covid-432211280 | 552a86ab1da2d6c1652ddda5fc2472fc62dd69a4 | 2021-12-20T01:23:47.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:anelnurkayeva/autonlp-data-covid",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | anelnurkayeva | null | anelnurkayeva/autonlp-covid-432211280 | 4 | null | transformers | 18,355 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- anelnurkayeva/autonlp-data-covid
co2_eq_emissions: 8.898145050355591
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 432211280
- CO2 Emissions (in grams): 8.898145050355591
## Validation Metrics
- Loss: 0.12489336729049683
- Accuracy: 0.9520089285714286
- Precision: 0.9436443331246086
- Recall: 0.9747736093143596
- AUC: 0.9910066767410616
- F1: 0.958956411072224
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anelnurkayeva/autonlp-covid-432211280
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
angiquer/twitterko-electra-base-discriminator-large | ebf0022d50dd9bff082e6bdbca3ed505f9473bb9 | 2020-07-10T01:48:02.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | angiquer | null | angiquer/twitterko-electra-base-discriminator-large | 4 | null | transformers | 18,356 | Entry not found |
anirudh21/albert-large-v2-finetuned-cola | ffe6ce7a91c444a1116aa9ad406b911735e9e829 | 2022-01-28T04:37:55.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
] | text-classification | false | anirudh21 | null | anirudh21/albert-large-v2-finetuned-cola | 4 | null | transformers | 18,357 | Entry not found |
anirudh21/albert-large-v2-finetuned-qnli | 07ab25da7d476cf13244472853692dc818b99651 | 2022-01-28T20:58:54.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
] | text-classification | false | anirudh21 | null | anirudh21/albert-large-v2-finetuned-qnli | 4 | null | transformers | 18,358 | Entry not found |
anirudh21/albert-large-v2-finetuned-qqp | 6102b0fbc412d8b03285148efe7e1bcab6d4242a | 2022-01-29T06:28:41.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
] | text-classification | false | anirudh21 | null | anirudh21/albert-large-v2-finetuned-qqp | 4 | null | transformers | 18,359 | Entry not found |
anirudh21/albert-large-v2-finetuned-sst2 | df02f259f73ce2859d37ae88cef5334b6bb62860 | 2022-01-28T07:46:04.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
] | text-classification | false | anirudh21 | null | anirudh21/albert-large-v2-finetuned-sst2 | 4 | null | transformers | 18,360 | Entry not found |
anirudh21/bert-base-uncased-finetuned-wnli | b4d191c88e704c168045faf70eb8890465ccb0a9 | 2022-01-24T13:33:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/bert-base-uncased-finetuned-wnli | 4 | null | transformers | 18,361 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6854
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6854 | 0.5634 |
| No log | 2.0 | 80 | 0.6983 | 0.3239 |
| No log | 3.0 | 120 | 0.6995 | 0.5352 |
| No log | 4.0 | 160 | 0.6986 | 0.5634 |
| No log | 5.0 | 200 | 0.6996 | 0.5634 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
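## Usage
The sections above are placeholders, so here is a minimal sentence-pair inference sketch (both sentences are made up, and the index-to-label mapping follows whatever GLUE/WNLI convention was used when the classification head was trained):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "anirudh21/bert-base-uncased-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# WNLI is a sentence-pair task, so both sentences go through the tokenizer together
premise = "The trophy does not fit in the suitcase because it is too big."
hypothesis = "The trophy is too big."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

print(probs)  # class probabilities over the two WNLI labels
```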
|
anirudh21/distilbert-base-uncased-finetuned-qnli | eca56f705710549d28dd0e9c872a738802448c51 | 2022-01-12T12:39:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/distilbert-base-uncased-finetuned-qnli | 4 | null | transformers | 18,362 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6064981949458483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-qnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8121
- Accuracy: 0.6065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6949 | 0.4874 |
| No log | 2.0 | 312 | 0.6596 | 0.5957 |
| No log | 3.0 | 468 | 0.7186 | 0.5812 |
| 0.6026 | 4.0 | 624 | 0.7727 | 0.6029 |
| 0.6026 | 5.0 | 780 | 0.8121 | 0.6065 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anirudh21/distilbert-base-uncased-finetuned-rte | 518ac617cb4e390057e0221739ba2b1aadf91c45 | 2022-01-12T11:32:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/distilbert-base-uncased-finetuned-rte | 4 | null | transformers | 18,363 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6173285198555957
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-rte
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6661
- Accuracy: 0.6173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6921 | 0.5162 |
| No log | 2.0 | 312 | 0.6661 | 0.6173 |
| No log | 3.0 | 468 | 0.7794 | 0.5632 |
| 0.5903 | 4.0 | 624 | 0.8832 | 0.5921 |
| 0.5903 | 5.0 | 780 | 0.9376 | 0.5921 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anton-l/megatron-11b | fbbd34d81ef3eef9fd0dde5274e36f4f158b9934 | 2021-02-20T13:39:44.000Z | [
"pytorch",
"megatron",
"text-generation",
"transformers"
] | text-generation | false | anton-l | null | anton-l/megatron-11b | 4 | 3 | transformers | 18,364 | Entry not found |
anton-l/wav2vec2-large-xlsr-53-sakha | 5ce17ed2559fa5e9a92b3367afc4011fce1b33d9 | 2021-07-05T20:31:23.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sah",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-sakha | 4 | null | transformers | 18,365 | ---
language: sah
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Sakha XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sah
type: common_voice
args: sah
metrics:
- name: Test WER
type: wer
value: 32.23
---
# Wav2Vec2-Large-XLSR-53-Sakha
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Sakha using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sah", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Sakha test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/sah.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/sah/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/sah/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 32.23 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft-100sh | 3bd249f29960f14e5241d1e3f184b66a86ceedfd | 2022-01-30T02:42:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-xls-r-common_voice-tr-ft-100sh | 4 | null | transformers | 18,366 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5806
- Wer: 0.3998
- Cer: 0.1053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.5369 | 17.0 | 500 | 0.6021 | 0.6366 | 0.1727 |
| 0.3542 | 34.0 | 1000 | 0.5265 | 0.4906 | 0.1278 |
| 0.1866 | 51.0 | 1500 | 0.5805 | 0.4768 | 0.1261 |
| 0.1674 | 68.01 | 2000 | 0.5336 | 0.4518 | 0.1186 |
| 0.19 | 86.0 | 2500 | 0.5676 | 0.4427 | 0.1151 |
| 0.0815 | 103.0 | 3000 | 0.5510 | 0.4268 | 0.1125 |
| 0.0545 | 120.0 | 3500 | 0.5608 | 0.4175 | 0.1099 |
| 0.0299 | 137.01 | 4000 | 0.5875 | 0.4222 | 0.1124 |
| 0.0267 | 155.0 | 4500 | 0.5882 | 0.4026 | 0.1063 |
| 0.025 | 172.0 | 5000 | 0.5806 | 0.3998 | 0.1053 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream | 9208e8a3c7197b5f48b7bad4a2a12ec54594bd3b | 2022-01-31T17:19:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream | 4 | null | transformers | 18,367 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft-stream
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft-stream
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3519
- Wer: 0.2927
- Cer: 0.0694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.6768 | 9.01 | 500 | 0.4220 | 0.5143 | 0.1235 |
| 0.3801 | 19.01 | 1000 | 0.3303 | 0.4403 | 0.1055 |
| 0.3616 | 29.0 | 1500 | 0.3540 | 0.3716 | 0.0878 |
| 0.2334 | 39.0 | 2000 | 0.3666 | 0.3671 | 0.0842 |
| 0.3141 | 49.0 | 2500 | 0.3407 | 0.3373 | 0.0819 |
| 0.1926 | 58.01 | 3000 | 0.3886 | 0.3520 | 0.0867 |
| 0.1372 | 68.01 | 3500 | 0.3415 | 0.3189 | 0.0743 |
| 0.091 | 78.0 | 4000 | 0.3750 | 0.3164 | 0.0757 |
| 0.0893 | 88.0 | 4500 | 0.3559 | 0.2968 | 0.0712 |
| 0.095 | 98.0 | 5000 | 0.3519 | 0.2927 | 0.0694 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
anurag0077/distilbert-base-uncased-finetuned-squad | be0410c59b9759274fd452b87e86c2c98a162a93 | 2021-11-05T08:50:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | anurag0077 | null | anurag0077/distilbert-base-uncased-finetuned-squad | 4 | null | transformers | 18,368 | Entry not found |
anuragshas/wav2vec2-large-xls-r-300m-or | 543f047314f4e96ba12caabe3583693beab88f9a | 2022-03-24T11:57:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-or | 4 | 1 | transformers | 18,369 | ---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-or
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice 7
args: or
metrics:
- type: wer
value: 47.186
name: Test WER
- name: Test CER
type: cer
value: 11.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6618
- Wer: 0.5166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.0493 | 23.53 | 400 | 2.9728 | 1.0 |
| 0.5306 | 47.06 | 800 | 1.2895 | 0.6138 |
| 0.1253 | 70.59 | 1200 | 1.6854 | 0.5703 |
| 0.0763 | 94.12 | 1600 | 1.9433 | 0.5870 |
| 0.0552 | 117.65 | 2000 | 1.4393 | 0.5575 |
| 0.0382 | 141.18 | 2400 | 1.4665 | 0.5537 |
| 0.0286 | 164.71 | 2800 | 1.5441 | 0.5320 |
| 0.0212 | 188.24 | 3200 | 1.6502 | 0.5115 |
| 0.0168 | 211.76 | 3600 | 1.6411 | 0.5332 |
| 0.0129 | 235.29 | 4000 | 1.6618 | 0.5166 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-or --dataset mozilla-foundation/common_voice_7_0 --config or --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-or"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "or", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "ପରରାଏ ବାଲା ଗସ୍ତି ଫାଣ୍ଡି ଗୋପାଳ ପରଠାରୁ ଦେଢ଼କଶ ଦୂର"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 51.92 | 47.186 |
|
anuragshas/wav2vec2-large-xls-r-300m-pa-in | 97bae4bccc21e446e06f8359448429f1876b81e3 | 2022-03-24T11:53:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"pa",
"pa-IN",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-pa-in | 4 | null | transformers | 18,370 | ---
language:
- pa
- pa-IN
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Punjabi
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice 7
args: pa-IN
metrics:
- type: wer
value: 45.611
name: Test WER
- name: Test CER
type: cer
value: 15.584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Punjabi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2548
- Wer: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.4804 | 16.65 | 400 | 1.8461 | 1.0 |
| 0.474 | 33.33 | 800 | 1.1018 | 0.6624 |
| 0.1389 | 49.98 | 1200 | 1.1918 | 0.6103 |
| 0.0919 | 66.65 | 1600 | 1.1889 | 0.6058 |
| 0.0657 | 83.33 | 2000 | 1.2266 | 0.5931 |
| 0.0479 | 99.98 | 2400 | 1.2512 | 0.5902 |
| 0.0355 | 116.65 | 2800 | 1.2548 | 0.5677 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-pa-in --dataset mozilla-foundation/common_voice_7_0 --config pa-IN --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-pa-in"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "pa-IN", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "ਉਨ੍ਹਾਂ ਨੇ ਸਾਰੇ ਤੇਅਰਵੇ ਵੱਖਰੀ ਕਿਸਮ ਦੇ ਕੀਤੇ ਹਨ"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 51.968 | 45.611 |
|
anuragshas/wav2vec2-xls-r-1b-hi | 0f13426cb386f06dd8dc89b6f2f6d1a695624743 | 2022-03-23T18:29:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-1b-hi | 4 | 1 | transformers | 18,371 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-1b-hi-cv7
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice 7
args: hi
metrics:
- type: wer
value: 18.504
name: Test WER
- name: Test CER
type: cer
value: 6.655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hi-cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5878
- Wer: 0.3419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.9859 | 2.72 | 400 | 1.1663 | 0.7948 |
| 1.2969 | 5.44 | 800 | 0.7725 | 0.6562 |
| 1.1954 | 8.16 | 1200 | 0.5940 | 0.4904 |
| 1.164 | 10.88 | 1600 | 0.5338 | 0.4316 |
| 1.1464 | 13.6 | 2000 | 0.5432 | 0.4226 |
| 1.1553 | 16.33 | 2400 | 0.5471 | 0.4260 |
| 1.0985 | 19.05 | 2800 | 0.5290 | 0.4076 |
| 1.0421 | 21.77 | 3200 | 0.5672 | 0.4181 |
| 0.9831 | 24.49 | 3600 | 0.5741 | 0.4141 |
| 0.9827 | 27.21 | 4000 | 0.5754 | 0.4179 |
| 0.9669 | 29.93 | 4400 | 0.5310 | 0.3889 |
| 0.9496 | 32.65 | 4800 | 0.5649 | 0.4062 |
| 0.9112 | 35.37 | 5200 | 0.5738 | 0.3926 |
| 0.8838 | 38.1 | 5600 | 0.5232 | 0.3768 |
| 0.8666 | 40.81 | 6000 | 0.5510 | 0.3852 |
| 0.8366 | 43.54 | 6400 | 0.5436 | 0.3837 |
| 0.7957 | 46.26 | 6800 | 0.5337 | 0.3775 |
| 0.7834 | 48.98 | 7200 | 0.5611 | 0.3844 |
| 0.7685 | 51.7 | 7600 | 0.5710 | 0.4008 |
| 0.7431 | 54.42 | 8000 | 0.5636 | 0.3726 |
| 0.7353 | 57.14 | 8400 | 0.5937 | 0.3836 |
| 0.7001 | 59.86 | 8800 | 0.5815 | 0.3858 |
| 0.6799 | 62.58 | 9200 | 0.5862 | 0.3696 |
| 0.6459 | 65.31 | 9600 | 0.6181 | 0.3762 |
| 0.6121 | 68.03 | 10000 | 0.5637 | 0.3590 |
| 0.5942 | 70.75 | 10400 | 0.6374 | 0.3882 |
| 0.5769 | 73.47 | 10800 | 0.6015 | 0.3640 |
| 0.5689 | 76.19 | 11200 | 0.5669 | 0.3508 |
| 0.5461 | 78.91 | 11600 | 0.5967 | 0.3621 |
| 0.5286 | 81.63 | 12000 | 0.5840 | 0.3605 |
| 0.5057 | 84.35 | 12400 | 0.5848 | 0.3489 |
| 0.482 | 87.07 | 12800 | 0.5860 | 0.3488 |
| 0.4655 | 89.79 | 13200 | 0.5780 | 0.3453 |
| 0.4523 | 92.52 | 13600 | 0.6150 | 0.3532 |
| 0.4422 | 95.24 | 14000 | 0.5930 | 0.3452 |
| 0.4436 | 97.96 | 14400 | 0.5867 | 0.3428 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-1b-hi --dataset mozilla-foundation/common_voice_7_0 --config hi --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-1b-hi"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "hi", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "तुम्हारे पास तीन महीने बचे हैं"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 28.942 | 18.504 | |
anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm | 095796ff39b18c81d9555b8b15bd6ca3d4d6472c | 2022-03-24T11:57:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm | 4 | null | transformers | 18,372 | ---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Maltese
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: mt
metrics:
- type: wer
value: 15.967
name: Test WER
- name: Test CER
type: cer
value: 3.657
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Maltese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1895
- Wer: 0.1984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4219 | 3.6 | 400 | 3.3127 | 1.0 |
| 3.0399 | 7.21 | 800 | 3.0330 | 1.0 |
| 1.5756 | 10.81 | 1200 | 0.6108 | 0.5724 |
| 1.0995 | 14.41 | 1600 | 0.3091 | 0.3154 |
| 0.9639 | 18.02 | 2000 | 0.2596 | 0.2841 |
| 0.9032 | 21.62 | 2400 | 0.2270 | 0.2514 |
| 0.8145 | 25.23 | 2800 | 0.2172 | 0.2483 |
| 0.7845 | 28.83 | 3200 | 0.2084 | 0.2333 |
| 0.7694 | 32.43 | 3600 | 0.1974 | 0.2234 |
| 0.7333 | 36.04 | 4000 | 0.2020 | 0.2185 |
| 0.693 | 39.64 | 4400 | 0.1947 | 0.2148 |
| 0.6802 | 43.24 | 4800 | 0.1960 | 0.2102 |
| 0.667 | 46.85 | 5200 | 0.1904 | 0.2072 |
| 0.6486 | 50.45 | 5600 | 0.1881 | 0.2009 |
| 0.6339 | 54.05 | 6000 | 0.1877 | 0.1989 |
| 0.6254 | 57.66 | 6400 | 0.1893 | 0.2003 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config mt --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mt", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "għadu jilagħbu ċirku tant bilfondi"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 19.853 | 15.967 | |
aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616 | 8ff7379ea6a12cd2d201780ee27d78c3c14348c8 | 2021-05-18T23:44:51.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aodiniz | null | aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616 | 4 | null | transformers | 18,373 | # BERT L-10 H-512 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with [10 Transformer layers and hidden embedding of size 512](https://huggingface.co/google/bert_uncased_L-10_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-10_H-512_A-8 \
    --do_train \
    --train_data_file {cord19-200616-dataset} \
    --mlm \
    --mlm_probability 0.2 \
    --line_by_line \
    --block_size 512 \
    --per_device_train_batch_size 10 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616
```
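## Using the model
The training command above produces a masked-language-model checkpoint; a minimal, illustrative way to query it (the example sentence is made up) is:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616",
)

# The checkpoint is uncased, so input casing does not matter
for prediction in fill_mask("the coronavirus spreads mainly through respiratory [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```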
|
aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 | 4b3ed3df43656a063ebe4ef56f91b0d12ba179b0 | 2021-05-18T23:45:25.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"dataset:squad_v2",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 | 4 | null | transformers | 18,374 | ---
datasets:
- squad_v2
---
# BERT L-10 H-512 CORD-19 (2020/06/16) fine-tuned on SQuAD v2.0
BERT model with [10 Transformer layers and hidden embedding of size 512](https://huggingface.co/google/bert_uncased_L-10_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), [fine-tuned for MLM](https://huggingface.co/aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616) on CORD-19 dataset (as released on 2020/06/16) and fine-tuned for QA on SQuAD v2.0.
## Training the model
```bash
python run_squad.py \
    --model_type bert \
    --model_name_or_path aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616 \
    --train_file 'train-v2.0.json' \
    --predict_file 'dev-v2.0.json' \
    --do_train \
    --do_eval \
    --do_lower_case \
    --version_2_with_negative \
    --max_seq_length 384 \
    --per_gpu_train_batch_size 10 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616_squad2
```
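## Using the model
For extractive question answering, a minimal sketch along these lines should work (the question and context are made up):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2",
)

result = qa(
    question="How is the virus transmitted?",
    context=(
        "The virus is transmitted mainly through respiratory droplets "
        "produced when an infected person coughs or sneezes."
    ),
)
print(result["answer"], result["score"])
```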
|
aodiniz/bert_uncased_L-2_H-512_A-8_squad2_covid-qna | 55ccc3e2ef3b1f3efd574cce8c0fe882b9b4666c | 2021-05-18T23:50:34.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-512_A-8_squad2_covid-qna | 4 | null | transformers | 18,375 | Entry not found |
aodiniz/bert_uncased_L-4_H-256_A-4_squad2 | 70c75e68abceec8d55250dd053b7e9116f9a3801 | 2021-05-18T23:52:28.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-256_A-4_squad2 | 4 | null | transformers | 18,376 | Entry not found |
aodiniz/bert_uncased_L-4_H-256_A-4_squad2_covid-qna | a46dc52710ae960fa68a211263a08022f52353f5 | 2021-05-18T23:52:50.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-256_A-4_squad2_covid-qna | 4 | null | transformers | 18,377 | Entry not found |
aodiniz/bert_uncased_L-4_H-768_A-12_cord19-200616 | f09e6550e7e5d36923d1a34e5d0d6ba3bdecdd5c | 2021-05-18T23:55:42.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-768_A-12_cord19-200616 | 4 | null | transformers | 18,378 | Entry not found |
aodiniz/bert_uncased_L-4_H-768_A-12_cord19-200616_squad2 | e7a7cfafda107fe022b5bf9d64d1f638d5e2bd7d | 2021-05-18T23:56:16.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-768_A-12_cord19-200616_squad2 | 4 | null | transformers | 18,379 | Entry not found |
aodiniz/bert_uncased_L-4_H-768_A-12_squad2 | 4f9b6235411866d852dd2f0381de21b1fa5259dd | 2021-05-18T23:57:28.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-768_A-12_squad2 | 4 | null | transformers | 18,380 | Entry not found |
aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616_squad2 | 9ef5f510b5c46d1ecef4a89947ac324da755085d | 2021-05-18T23:59:12.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616_squad2 | 4 | null | transformers | 18,381 | Entry not found |
arampacha/wav2vec2-xls-r-1b-uk-cv | c6e4bb36f3017527d01c1cd23de279acf4270bef | 2022-03-23T18:30:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-xls-r-1b-uk-cv | 4 | null | transformers | 18,382 | ---
language:
- uk
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b-uk-cv
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice uk
args: uk
metrics:
- type: wer
value: 12.246920571994902
name: WER LM
- type: cer
value: 2.513653497966816
name: CER LM
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: uk
metrics:
- name: Test WER
type: wer
value: 46.56
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: uk
metrics:
- name: Test WER
type: wer
value: 35.98
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-uk-cv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1747
- Wer: 0.2107
- Cer: 0.0408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.3719 | 4.35 | 500 | 0.3389 | 0.4236 | 0.0833 |
| 1.1361 | 8.7 | 1000 | 0.2309 | 0.3162 | 0.0630 |
| 1.0517 | 13.04 | 1500 | 0.2166 | 0.3056 | 0.0597 |
| 1.0118 | 17.39 | 2000 | 0.2141 | 0.2784 | 0.0557 |
| 0.9922 | 21.74 | 2500 | 0.2231 | 0.2941 | 0.0594 |
| 0.9929 | 26.09 | 3000 | 0.2171 | 0.2892 | 0.0587 |
| 0.9485 | 30.43 | 3500 | 0.2236 | 0.2956 | 0.0599 |
| 0.9573 | 34.78 | 4000 | 0.2314 | 0.3043 | 0.0616 |
| 0.9195 | 39.13 | 4500 | 0.2169 | 0.2812 | 0.0580 |
| 0.8915 | 43.48 | 5000 | 0.2109 | 0.2780 | 0.0560 |
| 0.8449 | 47.83 | 5500 | 0.2050 | 0.2534 | 0.0514 |
| 0.8028 | 52.17 | 6000 | 0.2032 | 0.2456 | 0.0492 |
| 0.7881 | 56.52 | 6500 | 0.1890 | 0.2380 | 0.0469 |
| 0.7423 | 60.87 | 7000 | 0.1816 | 0.2245 | 0.0442 |
| 0.7248 | 65.22 | 7500 | 0.1789 | 0.2165 | 0.0422 |
| 0.6993 | 69.57 | 8000 | 0.1747 | 0.2107 | 0.0408 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
arampacha/wav2vec2-xls-r-300m-hy | c7bd96170804f04d78ad80b0379351ba83dc6d59 | 2022-03-24T11:51:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hy-AM",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hy",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-xls-r-300m-hy | 4 | null | transformers | 18,383 | ---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hy
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-hy
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice hy-AM
args: hy-AM
metrics:
- type: wer
value: 13.192818110850899
name: WER LM
- type: cer
value: 2.787051087506323
name: CER LM
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hy
metrics:
- name: Test WER
type: wer
value: 22.246048764990867
- name: Test CER
type: cer
value: 7.59406739840239
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-hy
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the /WORKSPACE/DATA/HY/NOIZY_STUDENT_3/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2293
- Wer: 0.3333
- Cer: 0.0602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 842
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.1471 | 7.02 | 400 | 3.1599 | 1.0 | 1.0 |
| 1.8691 | 14.04 | 800 | 0.7674 | 0.7361 | 0.1686 |
| 1.3227 | 21.05 | 1200 | 0.3849 | 0.5336 | 0.1007 |
| 1.163 | 28.07 | 1600 | 0.3015 | 0.4559 | 0.0823 |
| 1.0768 | 35.09 | 2000 | 0.2721 | 0.4032 | 0.0728 |
| 1.0224 | 42.11 | 2400 | 0.2586 | 0.3825 | 0.0691 |
| 0.9817 | 49.12 | 2800 | 0.2458 | 0.3653 | 0.0653 |
| 0.941 | 56.14 | 3200 | 0.2306 | 0.3388 | 0.0605 |
| 0.9235 | 63.16 | 3600 | 0.2315 | 0.3380 | 0.0615 |
| 0.9141 | 70.18 | 4000 | 0.2293 | 0.3333 | 0.0602 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
ardauzunoglu/c_ovk | f16dca476d5e2ddb5c57927428314589c7c21005 | 2022-02-08T17:43:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ardauzunoglu | null | ardauzunoglu/c_ovk | 4 | 1 | transformers | 18,384 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: c_ovk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c_ovk
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2516
- Accuracy: 0.9249
- F1: 0.9044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4038 | 1.0 | 2462 | 0.2424 | 0.9117 | 0.8848 |
| 0.2041 | 2.0 | 4924 | 0.2323 | 0.9230 | 0.9028 |
| 0.1589 | 3.0 | 7386 | 0.2516 | 0.9249 | 0.9044 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
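## Usage
The label set is not documented in the card, so the sketch below simply prints whatever labels the checkpoint exposes; the Turkish example sentence is made up:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ardauzunoglu/c_ovk")

# Returns e.g. [{'label': 'LABEL_0', 'score': ...}] unless custom id2label names were saved with the checkpoint
print(classifier("Bu ürün beklentimi fazlasıyla karşıladı."))
```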
|
ardauzunoglu/mlm-ovk | 8ae91d7d5f0683569a6fc0e3b9f6da755d0a75d8 | 2022-02-09T20:02:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ardauzunoglu | null | ardauzunoglu/mlm-ovk | 4 | 1 | transformers | 18,385 | Fine-tuning results (3 epochs):
- train_loss: 0.6488333813678379
- loss: 0.3272
- eval_loss: 0.2554474174976349
- Perplexity: 1.29 |
ardauzunoglu/sentence-relatedness | 4358e47935270630c027c58b018dedbf67b109d1 | 2022-02-10T22:28:52.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | ardauzunoglu | null | ardauzunoglu/sentence-relatedness | 4 | 1 | sentence-transformers | 18,386 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ardauzunoglu/sentence-relatedness
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ardauzunoglu/sentence-relatedness')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ardauzunoglu/sentence-relatedness)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1064 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
aretw0/t5-small-finetuned-en-to-ro-dataset_20 | 0cbf5668aa289f931f10aa211a4d6f2b95490a47 | 2021-12-03T00:48:42.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | aretw0 | null | aretw0/t5-small-finetuned-en-to-ro-dataset_20 | 4 | null | transformers | 18,387 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-dataset_20
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
- Bleu: 7.3293
- Gen Len: 18.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6029 | 1.0 | 7629 | 1.4052 | 7.3293 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
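For a quick inference check, a minimal sketch is shown below; the `translate English to Romanian:` task prefix is the usual T5 convention for WMT16 en-ro fine-tunes and is an assumption, since the card does not state the prefix used during training.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aretw0/t5-small-finetuned-en-to-ro-dataset_20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Task prefix is assumed; adjust it if the fine-tuning script used a different one.
text = "translate English to Romanian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```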
|
aripo99/dummy_model | 0c5af9342496651de1897d01e44df4aa5ba102f9 | 2021-07-02T01:28:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aripo99 | null | aripo99/dummy_model | 4 | null | transformers | 18,388 | Entry not found |
arjunth2001/priv_qna | 5493723c158c0e3166a859a87301ad6643eaa744 | 2021-10-07T02:48:20.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | arjunth2001 | null | arjunth2001/priv_qna | 4 | null | transformers | 18,389 | Entry not found |
arnolfokam/roberta-base-pcm | 74c3c92e8cfe7e6d252fc58408e71e693cde9a76 | 2021-11-24T21:18:39.000Z | [
"pytorch",
"roberta",
"token-classification",
"pcm",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | arnolfokam | null | arnolfokam/roberta-base-pcm | 4 | null | transformers | 18,390 | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
---
# Model description
**roberta-base-pcm** is a fine-tuned version of the RoBERTa base model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-pcm**| 88.55 | 82.45 | 85.39
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` |
lmqg/t5-large-squad-default | 88762565b4fdf52c2e687c432c9bef60fd2f275c | 2022-06-01T00:24:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"question generation",
"question answer generation",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-squad-default | 4 | null | transformers | 18,391 | ---
language:
- en
tags:
- question generation
- question answer generation
license: mit
datasets:
- squad
metrics:
- bleu
- meteor
- rouge
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Example 3"
---
# T5 finetuned on Question Generation
T5 model for question generation. Please visit [our repository](https://github.com/asahi417/t5-question-generation) for more detail. A minimal usage sketch is shown below.
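The sketch follows the highlight format used in the widget examples above (the answer span is wrapped in `<hl>` tokens after the `generate question:` prefix); generation settings such as `max_length` are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmqg/t5-large-squad-default"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Highlight the answer span with <hl> tokens, as in the widget examples.
text = ("generate question: <hl> Beyonce <hl> further expanded her acting career, "
        "starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |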
asakawa/wav2vec2-base-demo-colab | 2d5143e95ff19a1fae8c057229633158adef8761 | 2022-01-11T16:22:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | asakawa | null | asakawa/wav2vec2-base-demo-colab | 4 | null | transformers | 18,392 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4500
- Wer: 0.3391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5329 | 4.0 | 500 | 1.5741 | 1.0400 |
| 0.6432 | 8.0 | 1000 | 0.4571 | 0.4418 |
| 0.2214 | 12.0 | 1500 | 0.4381 | 0.3823 |
| 0.1294 | 16.0 | 2000 | 0.4706 | 0.3911 |
| 0.0868 | 20.0 | 2500 | 0.5252 | 0.3662 |
| 0.0616 | 24.0 | 3000 | 0.4828 | 0.3458 |
| 0.0461 | 28.0 | 3500 | 0.4500 | 0.3391 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
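A minimal transcription sketch is shown below. It assumes the repository includes the processor (feature extractor and tokenizer) saved during fine-tuning, and `sample.wav` is a placeholder path for a 16 kHz mono recording.
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "asakawa/wav2vec2-base-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes processor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # placeholder path, resampled to 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```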
|
asapp/sew-d-base-plus-100k | d5c36985b805ac614e8abb710d6a06806949e87c | 2021-10-28T13:48:40.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-d-base-plus-100k | 4 | null | transformers | 18,393 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-base+
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
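As a rough sketch of that substitution (loading this pretrained checkpoint for CTC fine-tuning), the snippet below uses an illustrative vocabulary size; in practice `vocab_size` must match the character vocabulary you build for your target data, as described in the linked blog post.
```python
from transformers import AutoFeatureExtractor, SEWDForCTC

model_id = "asapp/sew-d-base-plus-100k"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)

# The checkpoint is pretrained only, so the CTC head added here is randomly initialised.
model = SEWDForCTC.from_pretrained(
    model_id,
    vocab_size=32,              # illustrative; set to the size of your CTC vocabulary
    ctc_loss_reduction="mean",
)
```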
|
ashish-chouhan/xlm-roberta-base-finetuned-marc | 3dd9c44de7d474e11c844a6d186729a001a9827b | 2021-10-16T11:34:29.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ashish-chouhan | null | ashish-chouhan/xlm-roberta-base-finetuned-marc | 4 | null | transformers | 18,394 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0171
- Mae: 0.5310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1404 | 1.0 | 308 | 1.0720 | 0.5398 |
| 0.9805 | 2.0 | 616 | 1.0171 | 0.5310 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
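A minimal inference sketch is shown below; since the card only reports MAE, the head presumably predicts review star ratings, and how the raw outputs map back to stars depends on the fine-tuning script.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ashish-chouhan/xlm-roberta-base-finetuned-marc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# German example review; the model is multilingual (XLM-R) fine-tuned on amazon_reviews_multi.
inputs = tokenizer("Das Produkt kam beschädigt an und der Support half nicht.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits)  # interpretation of the logits depends on the training setup
```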
|
ashraq/dv-electra-small | 78f798b702c476a057cb9c6a06927bd52311369a | 2021-11-03T22:53:52.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ashraq | null | ashraq/dv-electra-small | 4 | 1 | transformers | 18,395 | Entry not found |
ashraq/tsdae-bert-base-dv-news-title | 5972d48ef54ac7d047659a8abc9a897dbb4edefd | 2021-12-07T20:06:24.000Z | [
"pytorch",
"bert",
"feature-extraction",
"dv",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ashraq | null | ashraq/tsdae-bert-base-dv-news-title | 4 | 1 | sentence-transformers | 18,396 | ---
language:
- dv
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Dhivehi TSDAE News BERT
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ashraq/tsdae-bert-base-dv-news-title')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ashraq/tsdae-bert-base-dv-news-title')
model = AutoModel.from_pretrained('ashraq/tsdae-bert-base-dv-news-title')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7331 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.00024
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ashwani-tanwar/Indo-Aryan-XLM-R-Base | 588fce24b07a2be9233cff6110bb84a18f7eecd3 | 2020-12-12T02:52:59.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"gu",
"hi",
"mr",
"bn",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ashwani-tanwar | null | ashwani-tanwar/Indo-Aryan-XLM-R-Base | 4 | null | transformers | 18,397 | ---
language:
- gu
- hi
- mr
- bn
---
# Indo-Aryan-XLM-R-Base
This model is finetuned over the base variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) on the Hindi, Gujarati, Marathi, and Bengali languages of the Indo-Aryan family, using the [OSCAR](https://oscar-corpus.com/) monolingual datasets. As these languages had imbalanced datasets, we used the same resampling strategies as in XLM-R pretraining to balance the combined dataset. We used the same masked language modelling (MLM) objective that was used for pretraining XLM-R. As the model is built on top of the pretrained XLM-R, it leverages *transfer learning* by exploiting the knowledge of its parent model.
## Dataset
The OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse dataset compared to other large homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
## Usage
- This model can be used for further finetuning for different NLP tasks using the Hindi, Gujarati, Marathi, and Bengali languages.
- It can be used to generate contextualised word representations for the words from the above languages.
- It can be used for domain adaptation.
- It can be used to predict the missing words from their sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Indo-Aryan-XLM-R-Base')
pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.")
print(pred_word)
```
```
[{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.7811868786811829, 'token': 85227, 'token_str': '▁શહેર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.055032357573509216, 'token': 66346, 'token_str': '▁ગામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક નામ છે.</s>', 'score': 0.0287721399217844, 'token': 29565, 'token_str': '▁નામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.02565067447721958, 'token': 63678, 'token_str': '▁રાજ્ય'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એકનગર છે.</s>', 'score': 0.022877279669046402, 'token': 69702, 'token_str': 'નગર'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base")
sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
|
astremo/friendly_JA | 4b2704a7c3c73535d61ee7b2b80b4bb55bf8b576 | 2022-05-22T14:57:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ja",
"dataset:astremo/friendly_JA_corpus",
"transformers",
"japanese",
"easy-japanese",
"friendly-japanese",
"sino-japanese",
"katakana",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | astremo | null | astremo/friendly_JA | 4 | 2 | transformers | 18,398 | ---
language:
- ja
license: cc-by-4.0
tags:
- japanese
- easy-japanese
- friendly-japanese
- sino-japanese
- katakana
datasets:
- astremo/friendly_JA_corpus
metrics:
- bleu
---
# friendly_JA-Model (T5 fine-tuned model)
MT model trained on the friendly_JA Corpus. It aims to make Japanese easier and more accessible to occidental people by using the Latin/English-derived katakana lexicon instead of the standard Sino-Japanese lexicon.
# Examples
| input | output|
|---|---|
|最適化を応用した機械翻訳モデルは高精度だ|オプティマイゼーションを応用したマシントランスレーションモデルは高いアキュラシーだ|
|彼は架空の世界に住んでいる|彼はイマジナリー世界に住んでいる|
|新型コロナウイルスに感染してしまった|コロナウイルスにかかってしまった|
|深層学習は難しい|ディープラーニングはむずかしい|
|新たな概念を紹介する|新しいコンセプトを紹介する|
|津波の警報が流れた|ツナミのアラートが流れた|
|南海トラフの災害は震源地による|南海トラフのディザスターはエピセンターによる|
|息子は際どい内容の本を読んでしまった|子どもはセンシティブなコンテンツの本を読んでしまった|
|彼女は非現金決済で払った|彼女はキャッシュレスで払った|
|係員は会議の予定を調整している|担当の人はアジェンダを調整している|
|友人とカラオケに行く予定があったが、彼女はどうしても美術館に行きたかった|友だちとカラオケに行くスケジュールがあったが、彼女はどうしてもミュージアムに行きたかった|
|国際会議に参加しました|インターナショナルコンファレンスに参加しました|
|部長は今日の会議に参加できかねました|部長は今日のミーティングに参加できませんでした。|
|新型コロナウイルスの予防接種による心膜炎が多数報告されている|コロナウイルスのワクチンによるペリカーダイティスがレポートされている|
|私はジョジョの奇妙な冒険が好き|私はジョジョのビザールアドベンチャーが好き|
|新型コロナウイルスウイルス オミクロン株 1人死亡 8249人感染|コロナウイルス オミクロンバリアント 1人死んだ 8249人インフェクション|
|2021年10月4日から岸田文雄は日本の総理大臣として勤めている|2021年10月4日から岸田文雄は日本のプライムミニスターとして働いている|
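# Usage
A minimal inference sketch is shown below; the card does not document a task prefix, so the input sentence is passed as-is (adjust if the training setup used one), and `max_length` is illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "astremo/friendly_JA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# First example from the table above.
text = "最適化を応用した機械翻訳モデルは高精度だ"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```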
# References
T5 Japanese pre-trained model: sonoisa/t5-base-japanese (https://huggingface.co/sonoisa/t5-base-japanese)
# License
Shield: [![CC BY 4.0][cc-by-shield]][cc-by]
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
|
avneet/distilbert-base-uncased-finetuned-sst2 | c84c214e2f581d66857e65b0c002ba3d4db93638 | 2021-08-02T16:33:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | avneet | null | avneet/distilbert-base-uncased-finetuned-sst2 | 4 | null | transformers | 18,399 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metric:
name: Accuracy
type: accuracy
value: 0.9151376146788991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3651
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1902 | 1.0 | 4210 | 0.3102 | 0.9117 |
| 0.1293 | 2.0 | 8420 | 0.3672 | 0.9048 |
| 0.084 | 3.0 | 12630 | 0.3651 | 0.9151 |
| 0.0682 | 4.0 | 16840 | 0.3971 | 0.9037 |
| 0.0438 | 5.0 | 21050 | 0.4720 | 0.9117 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
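A minimal sentiment-classification sketch is shown below; SST-2 fine-tunes typically expose `LABEL_0` (negative) and `LABEL_1` (positive) unless `id2label` was customised during training.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="avneet/distilbert-base-uncased-finetuned-sst2",
)
# Example review sentence; label mapping is an assumption (see note above).
print(classifier("The film is a thoughtful, moving piece of work."))
```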
|