modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
KFlash/bert-finetuned-squad-accelerate | 0dac3a32c91a90958186430695443f35ed72f802 | 2022-06-02T16:14:58.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | KFlash | null | KFlash/bert-finetuned-squad-accelerate | 1 | null | transformers | 32,500 | Entry not found |
neelan-elucidate-ai/baseline | 9681bc925d7e7dc08252ed964b6de0819c4e9f95 | 2022-05-30T06:45:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | neelan-elucidate-ai | null | neelan-elucidate-ai/baseline | 1 | null | transformers | 32,501 | ---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 207.6048
- Wer: 1.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
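For reference, a minimal sketch of how the values above could be expressed as `transformers` `TrainingArguments` (this is an illustration rather than the original training script; the output path is a placeholder and anything not listed above is left at its library default):
```python
from transformers import TrainingArguments

# Sketch only: values are copied from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="./wav2vec2-baseline",   # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10,
    fp16=True,                          # "Native AMP" mixed precision
)
```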
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3 | e5597f83d6f7333ed212c24921be53daa86f54f7 | 2022-05-30T07:31:14.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3 | 1 | null | transformers | 32,502 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 41.9656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3
This model is a fine-tuned version of [theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3](https://huggingface.co/theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1265
- Rouge1: 41.9656
- Rouge2: 15.3793
- Rougel: 24.0382
- Rougelsum: 37.6057
- Gen Len: 130.8531
## Model description
More information needed
## Intended uses & limitations
More information needed
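As a usage illustration (not taken from the original card), the checkpoint can be loaded with the standard summarization pipeline; the input text and generation lengths below are placeholders:
```python
from transformers import pipeline

# Minimal inference sketch; generation settings are illustrative only.
summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3",
)
article = "Transformer models have become the dominant approach to sequence modelling ..."  # placeholder text
print(summarizer(article, max_length=142, min_length=30)[0]["summary_text"])
```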
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.1485 | 1.0 | 33840 | 2.1265 | 41.9656 | 15.3793 | 24.0382 | 37.6057 | 130.8531 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_nofreeze_bs16_forMINDS.en.all2 | 0f01637c7fc217052423f3c91be2de1f6e10a6d2 | 2022-05-30T07:38:51.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone-small_nofreeze_bs16_forMINDS.en.all2 | 1 | null | transformers | 32,503 | wav2vec2 -> t5lephone
bs = 16
dropout = 0.3
performance: 29%
```json
{
"architectures": [
"SpeechMixEEDT5"
],
"decoder": {
"_name_or_path": "voidful/phoneme_byt5",
"add_cross_attention": true,
"architectures": [
"T5ForConditionalGeneration"
],
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"d_ff": 3584,
"d_kv": 64,
"d_model": 1472,
"decoder_start_token_id": 0,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout_rate": 0.1,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_factor": 1.0,
"is_decoder": true,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-06,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "t5",
"no_repeat_ngram_size": 0,
"num_beam_groups": 1,
"num_beams": 1,
"num_decoder_layers": 4,
"num_heads": 6,
"num_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"relative_attention_max_distance": 128,
"relative_attention_num_buckets": 32,
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": false,
"tokenizer_class": "ByT5Tokenizer",
"top_k": 50,
"top_p": 1.0,
"torch_dtype": "float32",
"torchscript": false,
"transformers_version": "4.17.0",
"typical_p": 1.0,
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 384
},
"encoder": {
"_name_or_path": "facebook/wav2vec2-large-lv60",
"activation_dropout": 0.1,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"add_cross_attention": false,
"apply_spec_augment": true,
"architectures": [
"Wav2Vec2ForPreTraining"
],
"attention_dropout": 0.1,
"bad_words_ids": null,
"bos_token_id": 1,
"chunk_size_feed_forward": 0,
"classifier_proj_size": 256,
"codevector_dim": 768,
"contrastive_logits_temperature": 0.1,
"conv_bias": true,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"cross_attention_hidden_size": null,
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"decoder_start_token_id": null,
"diversity_loss_weight": 0.1,
"diversity_penalty": 0.0,
"do_sample": false,
"do_stable_layer_norm": true,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.1,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"length_penalty": 1.0,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"max_length": 20,
"min_length": 0,
"model_type": "wav2vec2",
"no_repeat_ngram_size": 0,
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 128,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_size": 1024,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"prefix": null,
"problem_type": null,
"proj_codevector_dim": 768,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.17.0",
"typical_p": 1.0,
"use_bfloat16": false,
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
},
"is_encoder_decoder": true,
"model_type": "speechmix",
"torch_dtype": "float32",
"transformers_version": null
}
```
|
cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab | 5158323a10fb3930a9b86b3b538b9b41a76c804e | 2022-05-30T07:11:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cwchengtw | null | cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab | 1 | null | transformers | 32,504 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3873
- Wer: 0.3224
## Model description
More information needed
## Intended uses & limitations
More information needed
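As a usage illustration (not part of the original card), transcription with this checkpoint can be sketched as follows, assuming 16 kHz mono input and greedy CTC decoding; `sample.wav` is a placeholder path:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a clip and resample it to the 16 kHz rate expected by XLS-R models.
speech, rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```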
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0846 | 3.67 | 400 | 0.7488 | 0.7702 |
| 0.4487 | 7.34 | 800 | 0.4428 | 0.5255 |
| 0.1926 | 11.01 | 1200 | 0.4218 | 0.4667 |
| 0.1302 | 14.68 | 1600 | 0.3957 | 0.4269 |
| 0.0989 | 18.35 | 2000 | 0.4321 | 0.4085 |
| 0.0748 | 22.02 | 2400 | 0.4067 | 0.3904 |
| 0.0615 | 25.69 | 2800 | 0.3914 | 0.3557 |
| 0.0485 | 29.36 | 3200 | 0.3873 | 0.3224 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
stevemobs/deberta-base-combined-squad1-aqa-1epoch-and-newsqa-2epoch | 78389ddf96e600333a5d436a9ef5582724c58dd1 | 2022-05-30T07:04:49.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/deberta-base-combined-squad1-aqa-1epoch-and-newsqa-2epoch | 1 | null | transformers | 32,505 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-base-combined-squad1-aqa-1epoch-and-newsqa-2epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-combined-squad1-aqa-1epoch-and-newsqa-2epoch
This model is a fine-tuned version of [stevemobs/deberta-base-combined-squad1-aqa-1epoch](https://huggingface.co/stevemobs/deberta-base-combined-squad1-aqa-1epoch) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6693 | 1.0 | 17307 | 0.7171 |
| 0.4723 | 2.0 | 34614 | 0.7521 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AbhilashDatta/T5_qgen-squad-marco | 884ef88563d3d84508907056317e02075005f18f | 2022-05-30T05:52:35.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | AbhilashDatta | null | AbhilashDatta/T5_qgen-squad-marco | 1 | null | transformers | 32,506 | ---
license: afl-3.0
---
# Question generation using T5 transformer
<h2> <i>Input format: context: "..." answer: "..." </i></h2>
Import the pretrained model as well as tokenizer:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained('AbhilashDatta/T5_qgen-squad-marco')
tokenizer = T5Tokenizer.from_pretrained('AbhilashDatta/T5_qgen-squad-marco')
```
Then use the tokenizer to encode/decode and the model to generate:
```
import torch  # needed for torch.unsqueeze below

input = "context: My name is Abhilash Datta. answer: Abhilash"
batch = tokenizer(input, padding='longest', max_length=512, return_tensors='pt')
inputs_batch = batch['input_ids'][0]
inputs_batch = torch.unsqueeze(inputs_batch, 0)
ques_id = model.generate(inputs_batch, max_length=100, early_stopping=True)
ques_batch = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in ques_id]
print(ques_batch)
```
Output:
```
['what is my name']
``` |
cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab2 | ba3e6035bbfc4813fa96ea1229f44e729fca4483 | 2022-05-31T00:51:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cwchengtw | null | cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab2 | 1 | null | transformers | 32,507 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3738
- Wer: 0.3532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9022 | 3.7 | 400 | 0.6778 | 0.7414 |
| 0.4106 | 7.4 | 800 | 0.4123 | 0.5049 |
| 0.1862 | 11.11 | 1200 | 0.4260 | 0.4232 |
| 0.1342 | 14.81 | 1600 | 0.3951 | 0.4097 |
| 0.0997 | 18.51 | 2000 | 0.4100 | 0.3999 |
| 0.0782 | 22.22 | 2400 | 0.3918 | 0.3875 |
| 0.059 | 25.92 | 2800 | 0.3803 | 0.3698 |
| 0.0474 | 29.63 | 3200 | 0.3738 | 0.3532 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ruselkomp/deeppavlov-framebank-50size | da1d54ed87555d26af603ee3c7068a46b51ccf45 | 2022-05-30T14:11:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/deeppavlov-framebank-50size | 1 | null | transformers | 32,508 | ---
tags:
- generated_from_trainer
model-index:
- name: deeppavlov-framebank-50size
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deeppavlov-framebank-50size
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0733 | 1.0 | 2827 | 1.0076 |
| 0.7875 | 2.0 | 5654 | 1.0309 |
| 0.6003 | 3.0 | 8481 | 1.1007 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
Aktsvigun/bert-base-cnndm | 2b45217070f7098b3007358636a2082cda9d0da4 | 2022-05-30T10:54:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Aktsvigun | null | Aktsvigun/bert-base-cnndm | 1 | null | transformers | 32,509 | Entry not found |
y05uk/wav2vec2-base-timit-demo-google-colab | 3682a11fbbfa342aeeeefd26648df677d2c9ebe1 | 2022-05-30T13:32:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | y05uk | null | y05uk/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 32,510 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5353
- Wer: 0.3360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5345 | 1.0 | 500 | 1.8229 | 0.9810 |
| 0.8731 | 2.01 | 1000 | 0.5186 | 0.5165 |
| 0.4455 | 3.01 | 1500 | 0.4386 | 0.4572 |
| 0.3054 | 4.02 | 2000 | 0.4396 | 0.4286 |
| 0.2354 | 5.02 | 2500 | 0.4454 | 0.4051 |
| 0.1897 | 6.02 | 3000 | 0.4465 | 0.3925 |
| 0.1605 | 7.03 | 3500 | 0.4776 | 0.3974 |
| 0.1413 | 8.03 | 4000 | 0.5254 | 0.4062 |
| 0.1211 | 9.04 | 4500 | 0.5123 | 0.3913 |
| 0.1095 | 10.04 | 5000 | 0.4171 | 0.3711 |
| 0.1039 | 11.04 | 5500 | 0.4258 | 0.3732 |
| 0.0932 | 12.05 | 6000 | 0.4879 | 0.3701 |
| 0.0867 | 13.05 | 6500 | 0.4725 | 0.3637 |
| 0.0764 | 14.06 | 7000 | 0.5041 | 0.3636 |
| 0.0661 | 15.06 | 7500 | 0.4692 | 0.3646 |
| 0.0647 | 16.06 | 8000 | 0.4804 | 0.3612 |
| 0.0576 | 17.07 | 8500 | 0.5545 | 0.3628 |
| 0.0577 | 18.07 | 9000 | 0.5004 | 0.3557 |
| 0.0481 | 19.08 | 9500 | 0.5341 | 0.3558 |
| 0.0466 | 20.08 | 10000 | 0.5056 | 0.3514 |
| 0.0433 | 21.08 | 10500 | 0.4864 | 0.3481 |
| 0.0362 | 22.09 | 11000 | 0.4994 | 0.3473 |
| 0.0325 | 23.09 | 11500 | 0.5327 | 0.3446 |
| 0.0351 | 24.1 | 12000 | 0.5360 | 0.3445 |
| 0.0284 | 25.1 | 12500 | 0.5085 | 0.3399 |
| 0.027 | 26.1 | 13000 | 0.5344 | 0.3426 |
| 0.0247 | 27.11 | 13500 | 0.5310 | 0.3357 |
| 0.0251 | 28.11 | 14000 | 0.5201 | 0.3355 |
| 0.0228 | 29.12 | 14500 | 0.5353 | 0.3360 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
tclong/wav2vec2-base-vios | f6ce56489d6637b7626ae5543752ed86ee406f37 | 2022-06-04T16:09:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:vivos_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tclong | null | tclong/wav2vec2-base-vios | 1 | null | transformers | 32,511 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- vivos_dataset
model-index:
- name: wav2vec2-base-vios
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3729
- Wer: 0.2427
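The WER above is produced by the Trainer's evaluation loop; a comparable score could be computed with the `evaluate` library as sketched below (both lists are placeholders; in practice the predictions would come from decoding this checkpoint over the evaluation split):
```python
import evaluate

# Sketch only: replace these placeholder lists with real decoded outputs
# and reference transcripts from the evaluation split.
wer_metric = evaluate.load("wer")
predictions = ["xin chào các bạn", "hôm nay trời đẹp"]
references = ["xin chào các bạn", "hôm nay trời rất đẹp"]
print(wer_metric.compute(predictions=predictions, references=references))
```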
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4755 | 1.37 | 500 | 0.7991 | 0.5957 |
| 0.5424 | 2.75 | 1000 | 0.4290 | 0.3653 |
| 0.3586 | 4.12 | 1500 | 0.3809 | 0.2890 |
| 0.2824 | 5.49 | 2000 | 0.3808 | 0.2749 |
| 0.2249 | 6.87 | 2500 | 0.3467 | 0.2389 |
| 0.1745 | 8.24 | 3000 | 0.3688 | 0.2384 |
| 0.1459 | 9.61 | 3500 | 0.3729 | 0.2427 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Aktsvigun/bert-base-pubmed | 6b737c3fd72a62f274c8a602262594976101231a | 2022-05-30T14:13:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Aktsvigun | null | Aktsvigun/bert-base-pubmed | 1 | null | transformers | 32,512 | Entry not found |
ruselkomp/sber-framebank-50size | aadf535b6d6c45528bc907353e8528cde8ef9ccd | 2022-05-31T05:01:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/sber-framebank-50size | 1 | null | transformers | 32,513 | Entry not found |
eslamxm/mT5_multilingual_XLSum-finetuned-en-cnn | 78113aef7a97ad7705b62b44f241c49802edfdb1 | 2022-06-01T18:42:31.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eslamxm | null | eslamxm/mT5_multilingual_XLSum-finetuned-en-cnn | 1 | null | transformers | 32,514 | Entry not found |
Jiexing/sparc_add_coref_t5_3b_order_0514_ckpt-4224 | 653b47845f8e9735562425db7646a9abaacec60c | 2022-05-30T15:38:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jiexing | null | Jiexing/sparc_add_coref_t5_3b_order_0514_ckpt-4224 | 1 | null | transformers | 32,515 | Entry not found |
nadiaqutaiba/bert-base-uncased-finetuned-swag | 74f25c20049608bed50d49f1ffe95003b196ca3b | 2022-05-30T21:54:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:swag",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | nadiaqutaiba | null | nadiaqutaiba/bert-base-uncased-finetuned-swag | 1 | null | transformers | 32,516 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
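As a usage illustration (not part of the original card), SWAG-style inference can be sketched with the standard multiple-choice head; the prompt and candidate endings below are made up:
```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "nadiaqutaiba/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

prompt = "She opened the fridge and"
endings = ["took out the milk.", "drove to the airport.", "painted the ceiling.", "sang the anthem."]

# Encode the prompt against every ending, then add a batch dimension:
# the model expects input of shape (batch_size, num_choices, seq_len).
enc = tokenizer([prompt] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(endings[logits.argmax(dim=-1).item()])
```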
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Damith/mwe-xlm-roberta-base | 113bea9635590d9d285d3de6a731f5d8472ad22b | 2022-05-30T15:45:05.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Damith | null | Damith/mwe-xlm-roberta-base | 1 | null | transformers | 32,517 | ---
license: apache-2.0
---
|
theojolliffe/bart-cnn-science-v3-e1 | 8add11a8812303d45021313de8161676b7ad96c1 | 2022-05-30T18:32:12.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-science-v3-e1 | 1 | null | transformers | 32,518 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-science-v3-e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e1
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 1.0643 | 51.6454 | 31.8213 | 33.7711 | 49.3471 | 141.5926 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
stevemobs/deberta-base-combined-squad1-aqa-newsqa-50 | bbe2d7df4531eb8d1965b895171d1d369b271ffd | 2022-05-30T23:05:53.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/deberta-base-combined-squad1-aqa-newsqa-50 | 1 | null | transformers | 32,519 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-base-combined-squad1-aqa-newsqa-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-combined-squad1-aqa-newsqa-50
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9401 | 1.0 | 18532 | 0.8266 |
| 0.6811 | 2.0 | 37064 | 0.7756 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Adapting/t5-small-finetuned-xsum | 9abb1b724ad7fee458c615130ce1cdf2947419f3 | 2022-05-31T08:31:11.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Adapting | null | Adapting/t5-small-finetuned-xsum | 1 | null | transformers | 32,520 | Entry not found |
haritzpuerto/MiniLM-L12-H384-uncased-squad | 6097056a8e564ae8d0c0897615f113027a50848e | 2022-06-05T12:25:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | haritzpuerto | null | haritzpuerto/MiniLM-L12-H384-uncased-squad | 1 | null | transformers | 32,521 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- exact_match: 77.57805108798486
- f1: 85.73943867549627
- Loss: 1.0744
## Model description
More information needed
## Intended uses & limitations
More information needed
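As a usage illustration (not part of the original card), the checkpoint can be queried through the standard extractive question-answering pipeline; the question and context below are illustrative:
```python
from transformers import pipeline

# Minimal inference sketch over a SQuAD-style question/context pair.
qa = pipeline("question-answering", model="haritzpuerto/MiniLM-L12-H384-uncased-squad")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.")
print(result["answer"], result["score"])
```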
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1738 | 1.0 | 5475 | 1.0744 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
haritzpuerto/xtremedistil-squad | 5fd914007ef9ada5c9eb7f84af46fd461d28ed95 | 2022-05-30T21:36:34.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | haritzpuerto | null | haritzpuerto/xtremedistil-squad | 1 | null | transformers | 32,522 | Entry not found |
stevemobs/deberta-base-combined-squad1-aqa-newsqa-50-and-newsqa-50 | be4f7a0fcb48cdc0ea0e2bfdb233d5a45909a6ad | 2022-05-31T03:31:35.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/deberta-base-combined-squad1-aqa-newsqa-50-and-newsqa-50 | 1 | null | transformers | 32,523 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-base-combined-squad1-aqa-newsqa-50-and-newsqa-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-combined-squad1-aqa-newsqa-50-and-newsqa-50
This model is a fine-tuned version of [stevemobs/deberta-base-combined-squad1-aqa-newsqa-50](https://huggingface.co/stevemobs/deberta-base-combined-squad1-aqa-newsqa-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6957 | 1.0 | 8681 | 0.5072 |
| 0.4264 | 2.0 | 17362 | 0.4881 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
eabayed/wav2vec2emiratidialict_1 | a589e3e60ff482ade837623f52643c6fe385b5a1 | 2022-05-31T02:57:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"license:gpl-3.0"
] | automatic-speech-recognition | false | eabayed | null | eabayed/wav2vec2emiratidialict_1 | 1 | null | transformers | 32,524 | ---
license: gpl-3.0
---
Wav2vec2 model trained with audio clips from Arabic shows using the Emirati dialect. |
N0NAne/DialoGPT-small-harrypotter | c29032a0973c5daa9c0fbc9420cdf16490bb386f | 2022-05-31T05:51:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | N0NAne | null | N0NAne/DialoGPT-small-harrypotter | 1 | null | transformers | 32,525 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
hunkim/sentence-transformersklue-bert-base | b899b71522dd0f8ea8c3f68c3d3f0be9077534c8 | 2022-05-31T06:39:28.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | hunkim | null | hunkim/sentence-transformersklue-bert-base | 1 | null | sentence-transformers | 32,526 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hunkim/sentence-transformersklue-bert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hunkim/sentence-transformersklue-bert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hunkim/sentence-transformersklue-bert-base')
model = AutoModel.from_pretrained('hunkim/sentence-transformersklue-bert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hunkim/sentence-transformersklue-bert-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 146,
"weight_decay": 0.01
}
```
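For reference (not part of the original card), a comparable run can be sketched with the sentence-transformers `fit()` API. The base checkpoint name is inferred from the repository name, and the two training pairs are placeholders:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumption: the model was initialised from klue/bert-base, as the repo name suggests.
model = SentenceTransformer("klue/bert-base")

# Placeholder STS-style pairs with float similarity labels in [0, 1].
train_examples = [
    InputExample(texts=["a sentence", "a very similar sentence"], label=0.9),
    InputExample(texts=["a sentence", "something unrelated"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=146,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```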
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
chrisvinsen/wav2vec2-15 | d1fb441732dda02c63efac575cbe412722b1d290 | 2022-05-31T11:13:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-15 | 1 | null | transformers | 32,527 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-15
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Wer: 0.8585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6808 | 1.37 | 200 | 3.7154 | 1.0 |
| 3.0784 | 2.74 | 400 | 3.1542 | 1.0 |
| 2.8919 | 4.11 | 600 | 2.9918 | 1.0 |
| 2.8317 | 5.48 | 800 | 2.8971 | 1.0 |
| 2.7958 | 6.85 | 1000 | 2.8409 | 1.0 |
| 2.7699 | 8.22 | 1200 | 2.8278 | 1.0 |
| 2.6365 | 9.59 | 1400 | 2.4657 | 1.0 |
| 2.1096 | 10.96 | 1600 | 1.8358 | 0.9988 |
| 1.6485 | 12.33 | 1800 | 1.4525 | 0.9847 |
| 1.3967 | 13.7 | 2000 | 1.2467 | 0.9532 |
| 1.2492 | 15.07 | 2200 | 1.1261 | 0.9376 |
| 1.1543 | 16.44 | 2400 | 1.0654 | 0.9194 |
| 1.0863 | 17.81 | 2600 | 1.0136 | 0.9161 |
| 1.0275 | 19.18 | 2800 | 0.9601 | 0.8827 |
| 0.9854 | 20.55 | 3000 | 0.9435 | 0.8878 |
| 0.9528 | 21.92 | 3200 | 0.9170 | 0.8807 |
| 0.926 | 23.29 | 3400 | 0.9121 | 0.8783 |
| 0.9025 | 24.66 | 3600 | 0.8884 | 0.8646 |
| 0.8909 | 26.03 | 3800 | 0.8836 | 0.8690 |
| 0.8717 | 27.4 | 4000 | 0.8810 | 0.8646 |
| 0.8661 | 28.77 | 4200 | 0.8623 | 0.8585 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
creynier/wav2vec2-base-swbd-turn-eos-long_short2s_utt_removed_4percent | 8ded6c2993c3782dc8253ded987111288aa8e601 | 2022-06-01T01:11:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short2s_utt_removed_4percent | 1 | null | transformers | 32,528 | Entry not found |
changjin/distilbert-base-uncased-finetuned-squad | 3d5f2d8cfaaed9f31241867424ee935d29b7b567 | 2022-05-31T08:40:38.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | changjin | null | changjin/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 32,529 | Entry not found |
moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64 | cf660fa4e0c5bc1462e457c3e97d231ca988bfc2 | 2022-05-31T09:24:16.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | moshew | null | moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64 | 1 | null | sentence-transformers | 32,530 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64')
model = AutoModel.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Splend1dchan/xtreme_s_w2v2_t5lephone-small_minds14.en-all | 74856bb8f5d4f49a8d4a007004c0b6b0c216d5c7 | 2022-05-31T11:37:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/xtreme_s_w2v2_t5lephone-small_minds14.en-all | 1 | null | transformers | 32,531 | Entry not found |
mikehemberger/tests | da0621cee932c70ec6772ba97e496ba9b5613346 | 2022-05-31T09:44:42.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | mikehemberger | null | mikehemberger/tests | 1 | null | transformers | 32,532 | Entry not found |
chrisvinsen/wav2vec2-16 | ba820a1bc74ca9f96d13e4a483f28560f7b53a83 | 2022-06-01T02:12:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-16 | 1 | null | transformers | 32,533 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-16
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1016
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.6682 | 1.37 | 200 | 3.3138 | 1.0 |
| 2.8751 | 2.74 | 400 | 2.9984 | 1.0 |
| 2.8697 | 4.11 | 600 | 3.0827 | 1.0 |
| 2.866 | 5.48 | 800 | 3.0697 | 1.0 |
| 2.8655 | 6.85 | 1000 | 3.1083 | 1.0 |
| 2.8629 | 8.22 | 1200 | 3.0888 | 1.0 |
| 2.8651 | 9.59 | 1400 | 3.2852 | 1.0 |
| 2.8601 | 10.96 | 1600 | 3.1155 | 1.0 |
| 2.8617 | 12.33 | 1800 | 3.1958 | 1.0 |
| 2.8595 | 13.7 | 2000 | 3.1070 | 1.0 |
| 2.858 | 15.07 | 2200 | 3.1483 | 1.0 |
| 2.8564 | 16.44 | 2400 | 3.0906 | 1.0 |
| 2.8561 | 17.81 | 2600 | 3.1412 | 1.0 |
| 2.8574 | 19.18 | 2800 | 3.0783 | 1.0 |
| 2.8543 | 20.55 | 3000 | 3.0624 | 1.0 |
| 2.8549 | 21.92 | 3200 | 3.0914 | 1.0 |
| 2.8556 | 23.29 | 3400 | 3.0735 | 1.0 |
| 2.8557 | 24.66 | 3600 | 3.1791 | 1.0 |
| 2.8576 | 26.03 | 3800 | 3.0645 | 1.0 |
| 2.8528 | 27.4 | 4000 | 3.1190 | 1.0 |
| 2.8551 | 28.77 | 4200 | 3.1016 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Splend1dchan/xtreme_s_w2v2_minds14.en-all | 9e69c03342fe2d67ef1cfddaff520e2e39b47eab | 2022-05-31T14:07:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/xtreme_s_w2v2_minds14.en-all | 1 | null | transformers | 32,534 | Entry not found |
MeshalAlamr/wav2vec2-xls-r-300m-ar-12 | 8b5784abeee04f5380c474c304c86e8e32ed4ee7 | 2022-06-20T02:48:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | MeshalAlamr | null | MeshalAlamr/wav2vec2-xls-r-300m-ar-12 | 1 | null | transformers | 32,535 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-ar-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar-12
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 77.9014
- Wer: 0.1633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 16832.9559 | 1.0 | 85 | 1596.5383 | 1.0 |
| 4748.8934 | 2.0 | 170 | 698.8426 | 1.0 |
| 2939.1952 | 3.0 | 255 | 633.2770 | 1.0 |
| 2833.7857 | 4.0 | 340 | 615.9734 | 1.0 |
| 2778.75 | 5.0 | 425 | 609.7852 | 1.0 |
| 2603.4421 | 6.0 | 510 | 435.0911 | 0.9998 |
| 1420.6594 | 7.0 | 595 | 165.1980 | 0.7542 |
| 811.7357 | 8.0 | 680 | 117.7532 | 0.5570 |
| 582.7924 | 9.0 | 765 | 93.8724 | 0.4447 |
| 469.1885 | 10.0 | 850 | 87.4084 | 0.3961 |
| 399.7348 | 11.0 | 935 | 78.7740 | 0.3562 |
| 348.0169 | 12.0 | 1020 | 72.9545 | 0.3278 |
| 314.0225 | 13.0 | 1105 | 70.8449 | 0.3149 |
| 281.4823 | 14.0 | 1190 | 66.1416 | 0.3013 |
| 263.0267 | 15.0 | 1275 | 66.6624 | 0.2761 |
| 238.7656 | 16.0 | 1360 | 66.3659 | 0.2742 |
| 227.9712 | 17.0 | 1445 | 65.1213 | 0.2616 |
| 209.4785 | 18.0 | 1530 | 66.3502 | 0.2600 |
| 198.6275 | 19.0 | 1615 | 66.7867 | 0.2589 |
| 189.7333 | 20.0 | 1700 | 65.1457 | 0.2499 |
| 183.3984 | 21.0 | 1785 | 68.7480 | 0.2534 |
| 174.6036 | 22.0 | 1870 | 67.8124 | 0.2480 |
| 167.1744 | 23.0 | 1955 | 70.3643 | 0.2438 |
| 160.6194 | 24.0 | 2040 | 68.3434 | 0.2387 |
| 154.096 | 25.0 | 2125 | 69.3449 | 0.2391 |
| 148.2008 | 26.0 | 2210 | 66.6332 | 0.2359 |
| 143.9339 | 27.0 | 2295 | 67.2253 | 0.2292 |
| 143.2862 | 28.0 | 2380 | 68.5232 | 0.2299 |
| 136.5192 | 29.0 | 2465 | 71.3180 | 0.2286 |
| 138.1667 | 30.0 | 2550 | 68.1166 | 0.2241 |
| 129.6961 | 31.0 | 2635 | 69.9885 | 0.2270 |
| 125.0034 | 32.0 | 2720 | 68.5696 | 0.2288 |
| 122.382 | 33.0 | 2805 | 69.9053 | 0.2237 |
| 121.4687 | 34.0 | 2890 | 72.5378 | 0.2325 |
| 121.637 | 35.0 | 2975 | 74.0948 | 0.2302 |
| 114.8182 | 36.0 | 3060 | 71.6004 | 0.2236 |
| 114.9692 | 37.0 | 3145 | 73.0708 | 0.2215 |
| 111.2695 | 38.0 | 3230 | 70.1939 | 0.2172 |
| 109.1332 | 39.0 | 3315 | 73.6910 | 0.2216 |
| 109.5747 | 40.0 | 3400 | 73.0911 | 0.2192 |
| 112.0337 | 41.0 | 3485 | 72.5238 | 0.2285 |
| 102.5452 | 42.0 | 3570 | 73.1730 | 0.2156 |
| 104.4951 | 43.0 | 3655 | 70.9824 | 0.2116 |
| 100.2483 | 44.0 | 3740 | 77.4810 | 0.2141 |
| 100.7275 | 45.0 | 3825 | 70.5330 | 0.2131 |
| 97.4453 | 46.0 | 3910 | 69.3713 | 0.2117 |
| 97.4768 | 47.0 | 3995 | 78.6786 | 0.2150 |
| 97.9564 | 48.0 | 4080 | 74.7395 | 0.2080 |
| 95.7626 | 49.0 | 4165 | 73.5510 | 0.2165 |
| 94.4995 | 50.0 | 4250 | 71.3337 | 0.2152 |
| 92.4394 | 51.0 | 4335 | 74.3506 | 0.2091 |
| 89.1442 | 52.0 | 4420 | 71.3629 | 0.2076 |
| 89.8932 | 53.0 | 4505 | 70.2986 | 0.2119 |
| 88.6913 | 54.0 | 4590 | 71.3645 | 0.2077 |
| 91.1411 | 55.0 | 4675 | 74.9795 | 0.2166 |
| 87.5678 | 56.0 | 4760 | 77.4106 | 0.2081 |
| 83.0826 | 57.0 | 4845 | 75.1502 | 0.2099 |
| 83.7437 | 58.0 | 4930 | 74.9253 | 0.2071 |
| 85.8112 | 59.0 | 5015 | 70.0373 | 0.2067 |
| 81.7675 | 60.0 | 5100 | 76.5425 | 0.2156 |
| 81.6714 | 61.0 | 5185 | 75.3845 | 0.2083 |
| 81.9356 | 62.0 | 5270 | 74.8665 | 0.2069 |
| 77.8237 | 63.0 | 5355 | 74.6538 | 0.2036 |
| 79.3037 | 64.0 | 5440 | 73.3461 | 0.2006 |
| 81.3878 | 65.0 | 5525 | 72.3601 | 0.2022 |
| 77.7095 | 66.0 | 5610 | 72.7715 | 0.2034 |
| 76.6013 | 67.0 | 5695 | 78.5694 | 0.2073 |
| 74.7015 | 68.0 | 5780 | 72.6246 | 0.2032 |
| 76.637 | 69.0 | 5865 | 73.9210 | 0.2095 |
| 74.1983 | 70.0 | 5950 | 75.4212 | 0.1995 |
| 73.328 | 71.0 | 6035 | 76.0840 | 0.1958 |
| 73.2174 | 72.0 | 6120 | 75.8443 | 0.2006 |
| 73.2776 | 73.0 | 6205 | 80.3562 | 0.2058 |
| 69.7834 | 74.0 | 6290 | 77.4640 | 0.2018 |
| 70.2896 | 75.0 | 6375 | 75.3303 | 0.1989 |
| 67.4863 | 76.0 | 6460 | 76.7881 | 0.2021 |
| 69.5997 | 77.0 | 6545 | 73.3460 | 0.1990 |
| 66.8822 | 78.0 | 6630 | 76.5326 | 0.2000 |
| 68.8483 | 79.0 | 6715 | 75.6460 | 0.1996 |
| 64.6421 | 80.0 | 6800 | 73.5708 | 0.1966 |
| 65.7658 | 81.0 | 6885 | 79.4043 | 0.1981 |
| 68.3581 | 82.0 | 6970 | 74.2181 | 0.1995 |
| 66.8769 | 83.0 | 7055 | 74.5230 | 0.1970 |
| 63.3021 | 84.0 | 7140 | 78.5190 | 0.1968 |
| 61.6227 | 85.0 | 7225 | 77.4760 | 0.1974 |
| 62.5638 | 86.0 | 7310 | 79.0764 | 0.1979 |
| 63.4932 | 87.0 | 7395 | 77.3330 | 0.1938 |
| 60.8015 | 88.0 | 7480 | 74.0066 | 0.1913 |
| 60.5176 | 89.0 | 7565 | 76.4915 | 0.1930 |
| 61.0698 | 90.0 | 7650 | 76.3846 | 0.1936 |
| 61.2012 | 91.0 | 7735 | 77.7306 | 0.1916 |
| 59.9138 | 92.0 | 7820 | 74.8689 | 0.1904 |
| 59.955 | 93.0 | 7905 | 77.6994 | 0.1921 |
| 60.1327 | 94.0 | 7990 | 77.2062 | 0.1896 |
| 57.2662 | 95.0 | 8075 | 78.6637 | 0.1926 |
| 60.3225 | 96.0 | 8160 | 79.5939 | 0.1921 |
| 56.1769 | 97.0 | 8245 | 79.2807 | 0.1917 |
| 56.4212 | 98.0 | 8330 | 76.9330 | 0.1904 |
| 55.0239 | 99.0 | 8415 | 76.5063 | 0.1890 |
| 54.8932 | 100.0 | 8500 | 76.7235 | 0.1866 |
| 55.0942 | 101.0 | 8585 | 74.4022 | 0.1875 |
| 53.9534 | 102.0 | 8670 | 76.1983 | 0.1855 |
| 54.8974 | 103.0 | 8755 | 74.1427 | 0.1834 |
| 53.0833 | 104.0 | 8840 | 74.4284 | 0.1845 |
| 54.4095 | 105.0 | 8925 | 73.8318 | 0.1840 |
| 53.0103 | 106.0 | 9010 | 75.3837 | 0.1858 |
| 52.1488 | 107.0 | 9095 | 75.4422 | 0.1845 |
| 52.6274 | 108.0 | 9180 | 81.5232 | 0.1882 |
| 49.8969 | 109.0 | 9265 | 76.7468 | 0.1905 |
| 50.2353 | 110.0 | 9350 | 77.5954 | 0.1889 |
| 48.6322 | 111.0 | 9435 | 77.4254 | 0.1868 |
| 49.8443 | 112.0 | 9520 | 75.5615 | 0.1834 |
| 48.3942 | 113.0 | 9605 | 75.4467 | 0.1829 |
| 50.5596 | 114.0 | 9690 | 76.4219 | 0.1894 |
| 49.3698 | 115.0 | 9775 | 74.8749 | 0.1846 |
| 49.8104 | 116.0 | 9860 | 77.8855 | 0.1846 |
| 46.308 | 117.0 | 9945 | 77.7105 | 0.1877 |
| 48.2955 | 118.0 | 10030 | 75.8736 | 0.1887 |
| 48.086 | 119.0 | 10115 | 78.3174 | 0.1856 |
| 47.3039 | 120.0 | 10200 | 77.9972 | 0.1818 |
| 44.4335 | 121.0 | 10285 | 77.9906 | 0.1831 |
| 44.79 | 122.0 | 10370 | 77.6622 | 0.1829 |
| 45.2491 | 123.0 | 10455 | 74.7864 | 0.1788 |
| 43.4817 | 124.0 | 10540 | 79.8335 | 0.1840 |
| 42.8565 | 125.0 | 10625 | 77.1184 | 0.1823 |
| 43.3137 | 126.0 | 10710 | 78.8980 | 0.1806 |
| 47.5019 | 127.0 | 10795 | 76.0757 | 0.1802 |
| 42.8448 | 128.0 | 10880 | 74.3782 | 0.1805 |
| 43.371 | 129.0 | 10965 | 75.9817 | 0.1763 |
| 42.5875 | 130.0 | 11050 | 75.2765 | 0.1790 |
| 41.3362 | 131.0 | 11135 | 76.6064 | 0.1771 |
| 42.0271 | 132.0 | 11220 | 75.4263 | 0.1784 |
| 39.8784 | 133.0 | 11305 | 77.8300 | 0.1794 |
| 40.6921 | 134.0 | 11390 | 78.6296 | 0.1792 |
| 39.4606 | 135.0 | 11475 | 79.6816 | 0.1778 |
| 37.5287 | 136.0 | 11560 | 78.0326 | 0.1782 |
| 41.5487 | 137.0 | 11645 | 77.2891 | 0.1758 |
| 41.2244 | 138.0 | 11730 | 75.5363 | 0.1758 |
| 38.8745 | 139.0 | 11815 | 78.4477 | 0.1757 |
| 39.4361 | 140.0 | 11900 | 74.8600 | 0.1745 |
| 37.9799 | 141.0 | 11985 | 74.5921 | 0.1767 |
| 40.0375 | 142.0 | 12070 | 75.4366 | 0.1755 |
| 38.1776 | 143.0 | 12155 | 76.9755 | 0.1757 |
| 39.0457 | 144.0 | 12240 | 78.5006 | 0.1783 |
| 36.8371 | 145.0 | 12325 | 74.9189 | 0.1755 |
| 36.6938 | 146.0 | 12410 | 78.4304 | 0.1746 |
| 35.208 | 147.0 | 12495 | 79.0332 | 0.1774 |
| 36.08 | 148.0 | 12580 | 77.9339 | 0.1746 |
| 37.4205 | 149.0 | 12665 | 76.0473 | 0.1748 |
| 36.1532 | 150.0 | 12750 | 77.6417 | 0.1740 |
| 36.4478 | 151.0 | 12835 | 77.7077 | 0.1740 |
| 35.2669 | 152.0 | 12920 | 77.4225 | 0.1728 |
| 33.9716 | 153.0 | 13005 | 76.0476 | 0.1722 |
| 33.7335 | 154.0 | 13090 | 75.8777 | 0.1717 |
| 33.2638 | 155.0 | 13175 | 78.7736 | 0.1716 |
| 32.744 | 156.0 | 13260 | 75.9818 | 0.1692 |
| 33.7618 | 157.0 | 13345 | 77.9544 | 0.1705 |
| 32.5823 | 158.0 | 13430 | 74.5033 | 0.1710 |
| 32.435 | 159.0 | 13515 | 77.1456 | 0.1703 |
| 32.631 | 160.0 | 13600 | 75.2885 | 0.1706 |
| 31.8537 | 161.0 | 13685 | 76.6699 | 0.1674 |
| 32.7374 | 162.0 | 13770 | 77.5112 | 0.1679 |
| 31.7985 | 163.0 | 13855 | 77.2261 | 0.1686 |
| 33.4709 | 164.0 | 13940 | 77.0829 | 0.1688 |
| 32.5837 | 165.0 | 14025 | 81.3337 | 0.1688 |
| 31.3551 | 166.0 | 14110 | 77.3803 | 0.1672 |
| 30.5367 | 167.0 | 14195 | 79.0103 | 0.1689 |
| 30.7095 | 168.0 | 14280 | 77.3184 | 0.1683 |
| 31.0545 | 169.0 | 14365 | 77.5170 | 0.1675 |
| 29.7835 | 170.0 | 14450 | 76.5517 | 0.1661 |
| 24.643 | 171.0 | 14535 | 77.7856 | 0.1684 |
| 29.8659 | 172.0 | 14620 | 78.2275 | 0.1689 |
| 29.8893 | 173.0 | 14705 | 76.9425 | 0.1677 |
| 29.0071 | 174.0 | 14790 | 76.2374 | 0.1674 |
| 28.8064 | 175.0 | 14875 | 77.7253 | 0.1657 |
| 28.1371 | 176.0 | 14960 | 77.0664 | 0.1666 |
| 28.3809 | 177.0 | 15045 | 77.4184 | 0.1659 |
| 27.953 | 178.0 | 15130 | 77.5284 | 0.1651 |
| 29.4455 | 179.0 | 15215 | 76.8801 | 0.1647 |
| 27.7792 | 180.0 | 15300 | 75.6964 | 0.1638 |
| 29.7077 | 181.0 | 15385 | 77.7636 | 0.1648 |
| 28.0373 | 182.0 | 15470 | 77.2047 | 0.1655 |
| 27.5775 | 183.0 | 15555 | 77.2836 | 0.1631 |
| 26.3244 | 184.0 | 15640 | 77.2574 | 0.1645 |
| 27.4902 | 185.0 | 15725 | 77.4289 | 0.1649 |
| 27.4503 | 186.0 | 15810 | 76.1098 | 0.1636 |
| 25.7041 | 187.0 | 15895 | 77.4126 | 0.1627 |
| 26.0029 | 188.0 | 15980 | 77.8391 | 0.1640 |
| 26.2039 | 189.0 | 16065 | 77.9678 | 0.1644 |
| 25.3233 | 190.0 | 16150 | 77.9595 | 0.1636 |
| 26.3017 | 191.0 | 16235 | 77.7247 | 0.1640 |
| 25.3848 | 192.0 | 16320 | 77.0303 | 0.1631 |
| 26.489 | 193.0 | 16405 | 77.0221 | 0.1632 |
| 24.5612 | 194.0 | 16490 | 77.1831 | 0.1632 |
| 24.3228 | 195.0 | 16575 | 77.3499 | 0.1638 |
| 24.7961 | 196.0 | 16660 | 77.6399 | 0.1633 |
| 26.368 | 197.0 | 16745 | 77.8759 | 0.1639 |
| 26.0979 | 198.0 | 16830 | 77.9501 | 0.1634 |
| 26.2053 | 199.0 | 16915 | 77.8439 | 0.1633 |
| 25.9718 | 200.0 | 17000 | 77.9014 | 0.1633 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.12.1
|
Splend1dchan/wav2vec2-large-lv60_mt5-base_nofreeze_bs64_drop.3 | 1b1f510ac52865f5a42fcc0bcf43c6fce5eaef15 | 2022-06-02T16:25:05.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_mt5-base_nofreeze_bs64_drop.3 | 1 | null | transformers | 32,536 | Entry not found |
wrice/wav2vec2-large-robust-ft-timit | 37e41734af51fd8806a57b872cdf139ccef58d97 | 2022-05-31T22:17:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | wrice | null | wrice/wav2vec2-large-robust-ft-timit | 1 | null | transformers | 32,537 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-robust-ft-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-robust-ft-timit
This model is a fine-tuned version of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2768
- Wer: 0.2321
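A minimal transcription sketch is shown below. It assumes the repository ships a matching `Wav2Vec2Processor` and that the input clip (`sample.wav`, a placeholder path) is mono audio; anything not resampled to 16 kHz is converted first.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load the fine-tuned checkpoint (processor availability is assumed)
processor = Wav2Vec2Processor.from_pretrained("wrice/wav2vec2-large-robust-ft-timit")
model = Wav2Vec2ForCTC.from_pretrained("wrice/wav2vec2-large-robust-ft-timit")

# "sample.wav" is a placeholder; resample to 16 kHz if needed
speech, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    speech = torchaudio.functional.resample(speech, sample_rate, 16_000)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```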
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.6175 | 1.0 | 500 | 3.3025 | 1.0 |
| 3.0746 | 2.01 | 1000 | 2.9598 | 1.0 |
| 1.967 | 3.01 | 1500 | 0.6760 | 0.5607 |
| 0.7545 | 4.02 | 2000 | 0.4500 | 0.4567 |
| 0.5415 | 5.02 | 2500 | 0.3702 | 0.3882 |
| 0.4445 | 6.02 | 3000 | 0.3421 | 0.3584 |
| 0.3601 | 7.03 | 3500 | 0.2947 | 0.3096 |
| 0.3098 | 8.03 | 4000 | 0.2740 | 0.2894 |
| 0.2606 | 9.04 | 4500 | 0.2725 | 0.2787 |
| 0.238 | 10.04 | 5000 | 0.2549 | 0.2617 |
| 0.2142 | 11.04 | 5500 | 0.2485 | 0.2530 |
| 0.1787 | 12.05 | 6000 | 0.2683 | 0.2514 |
| 0.1652 | 13.05 | 6500 | 0.2559 | 0.2476 |
| 0.1569 | 14.06 | 7000 | 0.2777 | 0.2470 |
| 0.1443 | 15.06 | 7500 | 0.2661 | 0.2431 |
| 0.1335 | 16.06 | 8000 | 0.2717 | 0.2422 |
| 0.1291 | 17.07 | 8500 | 0.2672 | 0.2428 |
| 0.1192 | 18.07 | 9000 | 0.2684 | 0.2395 |
| 0.1144 | 19.08 | 9500 | 0.2770 | 0.2411 |
| 0.1052 | 20.08 | 10000 | 0.2831 | 0.2379 |
| 0.1004 | 21.08 | 10500 | 0.2847 | 0.2375 |
| 0.1053 | 22.09 | 11000 | 0.2851 | 0.2360 |
| 0.1005 | 23.09 | 11500 | 0.2807 | 0.2361 |
| 0.0904 | 24.1 | 12000 | 0.2764 | 0.2346 |
| 0.0876 | 25.1 | 12500 | 0.2774 | 0.2325 |
| 0.0883 | 26.1 | 13000 | 0.2768 | 0.2313 |
| 0.0848 | 27.11 | 13500 | 0.2840 | 0.2307 |
| 0.0822 | 28.11 | 14000 | 0.2812 | 0.2316 |
| 0.09 | 29.12 | 14500 | 0.2768 | 0.2321 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.8.2+cu111
- Datasets 1.17.0
- Tokenizers 0.11.6
|
meghazisofiane/opus-mt-en-ar-finetuned-en-to-ar | cad0f31af8c513863b0dcacab285c162f194d9ef | 2022-06-03T17:27:04.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:un_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | meghazisofiane | null | meghazisofiane/opus-mt-en-ar-finetuned-en-to-ar | 1 | null | transformers | 32,538 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- un_multi
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-finetuned-en-to-ar
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: un_multi
type: un_multi
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 64.6767
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8133
- Bleu: 64.6767
- Gen Len: 17.595
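A minimal inference sketch for the fine-tuned checkpoint (the example sentence is illustrative, not taken from the evaluation set):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "meghazisofiane/opus-mt-en-ar-finetuned-en-to-ar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate an English sentence to Arabic
batch = tokenizer(["The committee will meet again next week."], return_tensors="pt")
generated = model.generate(**batch, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```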
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 50 | 0.7710 | 64.3416 | 17.4 |
| No log | 2.0 | 100 | 0.7569 | 63.9546 | 17.465 |
| No log | 3.0 | 150 | 0.7570 | 64.7484 | 17.385 |
| No log | 4.0 | 200 | 0.7579 | 65.4073 | 17.305 |
| No log | 5.0 | 250 | 0.7624 | 64.8939 | 17.325 |
| No log | 6.0 | 300 | 0.7696 | 65.1257 | 17.45 |
| No log | 7.0 | 350 | 0.7747 | 65.527 | 17.395 |
| No log | 8.0 | 400 | 0.7791 | 65.1357 | 17.52 |
| No log | 9.0 | 450 | 0.7900 | 65.3812 | 17.415 |
| 0.3982 | 10.0 | 500 | 0.7925 | 65.7346 | 17.39 |
| 0.3982 | 11.0 | 550 | 0.7951 | 65.1267 | 17.62 |
| 0.3982 | 12.0 | 600 | 0.8040 | 64.6874 | 17.495 |
| 0.3982 | 13.0 | 650 | 0.8069 | 64.7788 | 17.52 |
| 0.3982 | 14.0 | 700 | 0.8105 | 64.6701 | 17.585 |
| 0.3982 | 15.0 | 750 | 0.8120 | 64.7111 | 17.58 |
| 0.3982 | 16.0 | 800 | 0.8133 | 64.6767 | 17.595 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ThePixOne/SeconBERTa | 427aec37ecc469cd938478c8e859219059a3c5f8 | 2022-05-31T19:53:48.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ThePixOne | null | ThePixOne/SeconBERTa | 1 | null | sentence-transformers | 32,539 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 20799 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4159.8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Dizzykong/test-recipe | a37f3e6909adecb86352087eb986506b8cfff9ea | 2022-05-31T21:17:01.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/test-recipe | 1 | null | transformers | 32,540 | ---
tags:
- generated_from_trainer
model-index:
- name: test-recipe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-recipe
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.001
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Dizzykong/test-charles-dickens | 279d9599376fc7810330faf288957980a524ded3 | 2022-05-31T21:22:30.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/test-charles-dickens | 1 | null | transformers | 32,541 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test-charles-dickens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-charles-dickens
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
erickfm/t5-small-finetuned-bias | 220a1ca3ce75e905419adc5b63019a60f39401f0 | 2022-06-01T02:02:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WNC",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias | 1 | null | transformers | 32,542 | ---
language:
- en
license: apache-2.0
datasets:
- WNC
metrics:
- accuracy
---
This model is a fine-tune checkpoint of [T5-small](https://huggingface.co/t5-small), fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset composed of 180,000 biased and neutralized sentence pairs that are generated from Wikipedia edits tagged for “neutral point of view”. This model reaches an accuracy of 0.32 on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-small).
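A minimal inference sketch is given below; the exact input format this checkpoint expects (for example, whether a task prefix is required) is not documented here, so plain-text input is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "erickfm/t5-small-finetuned-bias"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Attempt to neutralize a subjectively worded sentence (plain-text input is assumed)
sentence = "The senator delivered yet another tedious speech on the bill."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```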
|
adache/xlm-roberta-base-finetuned-panx-de-fr | a5eae7931cccc52060eb0e5dee56db98d9a36286 | 2022-06-01T06:47:31.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | adache | null | adache/xlm-roberta-base-finetuned-panx-de-fr | 1 | null | transformers | 32,543 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
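A minimal sketch of running the fine-tuned tagger with the `token-classification` pipeline (it assumes the tokenizer was pushed with the checkpoint); the same pattern applies to the other PAN-X checkpoints below.
```python
from transformers import pipeline

# Aggregated entity spans from the German/French NER fine-tune
ner = pipeline(
    "token-classification",
    model="adache/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Paris im Mai."))
```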
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-fr | 203bd4b69cce6af55fad956613622d925c277b2a | 2022-06-01T07:13:59.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | adache | null | adache/xlm-roberta-base-finetuned-panx-fr | 1 | null | transformers | 32,544 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8053736356003358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3196
- F1: 0.8054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7741 | 1.0 | 96 | 0.3784 | 0.7542 |
| 0.3235 | 2.0 | 192 | 0.3267 | 0.7947 |
| 0.2164 | 3.0 | 288 | 0.3196 | 0.8054 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-it | 174d2bd46cb9245329203339321236bfdd7782bc | 2022-06-01T07:33:52.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | adache | null | adache/xlm-roberta-base-finetuned-panx-it | 1 | null | transformers | 32,545 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-en | 259010ef20fc52b81d38c8e730f437d13b5af321 | 2022-06-01T07:53:50.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | adache | null | adache/xlm-roberta-base-finetuned-panx-en | 1 | null | transformers | 32,546 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.692179700499168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ceggian/sbart_pt_reddit_softmax_64 | a74d53c35fd8879f9d10dff1f28a32ea114ecf01 | 2022-06-01T07:46:44.000Z | [
"pytorch",
"bart",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbart_pt_reddit_softmax_64 | 1 | null | sentence-transformers | 32,547 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BartModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
adache/xlm-roberta-base-finetuned-panx-all | 33243f743e186cc7a5918122a0d3d25d47cdda12 | 2022-06-01T08:20:34.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | adache | null | adache/xlm-roberta-base-finetuned-panx-all | 1 | null | transformers | 32,548 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1782
- F1: 0.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2995 | 1.0 | 739 | 0.1891 | 0.8085 |
| 0.1552 | 2.0 | 1478 | 0.1798 | 0.8425 |
| 0.1008 | 3.0 | 2217 | 0.1782 | 0.8541 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
elisabethvonoswald/wav2vec2-large-xls-r-300m-2022-06-01 | 3042e0c8ee9d24dfe958412e85e0a25d72968f84 | 2022-06-01T10:05:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | elisabethvonoswald | null | elisabethvonoswald/wav2vec2-large-xls-r-300m-2022-06-01 | 1 | null | transformers | 32,549 | Entry not found |
KM4STfulltext/SSCI-BERT-e4 | 55e70d3368e38378e06474002dd78f03f074cc9e | 2022-06-01T09:25:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | KM4STfulltext | null | KM4STfulltext/SSCI-BERT-e4 | 1 | null | transformers | 32,550 | ---
license: apache-2.0
---
# SSCI-BERT: A pretrained language model for social scientific text
## Introduction
Research on social science texts needs the support of natural language processing tools.
Pre-trained language models have greatly improved the accuracy of text mining on general texts, and there is now an urgent need for a pre-trained language model built specifically for the automatic processing of social science texts.
We used the abstracts of social science research articles as the training set. Based on the deep language model framework of BERT, we built the [SSCI-BERT and SSCI-SciBERT](https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) pre-trained language models with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).
We designed four downstream text classification tasks on different social science article corpora to verify the performance of the models.
- SSCI-BERT and SSCI-SciBERT are trained on the abstracts of articles published in SSCI journals from 1986 to 2021. The training set used in the experiments contains a total of `503910614 words`.
- Following the idea of domain-adaptive pretraining, `SSCI-BERT` and `SSCI-SciBERT` continue training BERT and SciBERT, respectively, on a large collection of scientific-article abstracts, yielding pre-trained models for the automatic processing of social science research texts.
## News
- 2022-03-24: SSCI-BERT and SSCI-SciBERT were released for the first time.
## How to use
### Huggingface Transformers
The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can load the SSCI-BERT and SSCI-SciBERT models directly from the Hub.
- SSCI-BERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
```
- SSCI-SciBERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
```
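As a quick sanity check of the masked-language-modeling head (a sketch; the example sentence is illustrative):
```python
from transformers import pipeline

# Fill-mask with the SSCI-BERT checkpoint from this repository
fill_mask = pipeline("fill-mask", model="KM4STfulltext/SSCI-BERT-e4")
print(fill_mask("Social capital is positively associated with civic [MASK]."))
```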
### Download Models
- The version of the model we provide is `PyTorch`.
### From Huggingface
- Download directly through Huggingface's official website.
- [KM4STfulltext/SSCI-BERT-e2](https://huggingface.co/KM4STfulltext/SSCI-BERT-e2)
- [KM4STfulltext/SSCI-SciBERT-e2](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e2)
- [KM4STfulltext/SSCI-BERT-e4 ](https://huggingface.co/KM4STfulltext/SSCI-BERT-e4)
- [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4)
### From Google Drive
We have put the model on Google Drive for users.
| Model | DATASET(year) | Base Model |
| ------------------------------------------------------------ | ------------- | ---------------------- |
| [SSCI-BERT-e2](https://drive.google.com/drive/folders/1xEDnovlwGO2JxqCaf3rdjS2cB6DOxhj4?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e2](https://drive.google.com/drive/folders/16DtIvnHvbrR_92MwgthRRsULW6An9te1?usp=sharing) (recommended) | 1986-2021 | Scibert-scivocab-cased |
| [SSCI-BERT-e4](https://drive.google.com/drive/folders/1sr6Av8p904Jrjps37g7E8aj4HnAHXSxW?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e4](https://drive.google.com/drive/folders/1ty-b4TIFu8FbilgC4VcI7Bgn_O5MDMVe?usp=sharing) | 1986-2021 | Scibert-scivocab-cased |
## Evaluation & Results
- We use SSCI-BERT and SSCI-SciBERT to perform text classification on different social science research corpora. The experimental results are as follows. The relevant datasets are available for download in the **Verification task datasets** folder of this project.
#### JCR Title Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 28.43 | 22.06 | 21.86 |
| Scibert-scivocab-cased | 38.48 | 33.89 | 33.92 |
| SSCI-BERT-e2 | 40.43 | 35.37 | 35.33 |
| SSCI-SciBERT-e2 | 41.35 | 37.27 | 37.25 |
| SSCI-BERT-e4 | 40.65 | 35.49 | 35.40 |
| SSCI-SciBERT-e4 | 41.13 | 36.96 | 36.94 |
| Support | 2300 | 2300 | 2300 |
#### JCR Abstract Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 48.59 | 42.8 | 42.82 |
| Scibert-scivocab-cased | 55.59 | 51.4 | 51.81 |
| SSCI-BERT-e2 | 58.05 | 53.31 | 53.73 |
| SSCI-SciBERT-e2 | 59.95 | 56.51 | 57.12 |
| SSCI-BERT-e4 | 59.00 | 54.97 | 55.59 |
| SSCI-SciBERT-e4 | 60.00 | 56.38 | 56.90 |
| Support | 2200 | 2200 | 2200 |
#### JCR Mixed Titles and Abstracts Dataset
| **Model** | **accuracy** | **macro avg** | **weighted avg** |
| ---------------------- | ------------ | -------------- | ----------------- |
| Bert-base-cased | 58.24 | 57.27 | 57.25 |
| Scibert-scivocab-cased | 59.58 | 58.65 | 58.68 |
| SSCI-BERT-e2 | 60.89 | 60.24 | 60.30 |
| SSCI-SciBERT-e2 | 60.96 | 60.54 | 60.51 |
| SSCI-BERT-e4 | 61.00 | 60.48 | 60.43 |
| SSCI-SciBERT-e4 | 61.24 | 60.71 | 60.75 |
| Support | 4500 | 4500 | 4500 |
#### SSCI Abstract Structural Function Recognition (Classify Dataset)
| | Bert-base-cased | SSCI-BERT-e2 | SSCI-BERT-e4 | support |
| ------------ | -------------------------- | ------------------- | ------------------- | ----------- |
| B | 63.77 | 64.29 | 64.63 | 224 |
| P | 53.66 | 57.14 | 57.99 | 95 |
| M | 87.63 | 88.43 | 89.06 | 323 |
| R | 86.81 | 88.28 | **88.47** | 419 |
| C | 78.32 | 79.82 | 78.95 | 316 |
| accuracy | 79.59 | 80.9 | 80.97 | 1377 |
| macro avg | 74.04 | 75.59 | 75.82 | 1377 |
| weighted avg | 79.02 | 80.32 | 80.44 | 1377 |
| | **Scibert-scivocab-cased** | **SSCI-SciBERT-e2** | **SSCI-SciBERT-e4** | **support** |
| B | 69.98 | **70.95** | **70.95** | 224 |
| P | 58.89 | **60.12** | 58.96 | 95 |
| M | 89.37 | **90.12** | 88.11 | 323 |
| R | 87.66 | 88.07 | 87.44 | 419 |
| C | 80.7 | 82.61 | **82.94** | 316 |
| accuracy | 81.63 | **82.72** | 82.06 | 1377 |
| macro avg | 77.32 | **78.37** | 77.68 | 1377 |
| weighted avg | 81.6 | **82.58** | 81.92 | 1377 |
## Cited
- If our content is helpful for your research work, please cite our work in your article.
- If you want to cite our work before our paper is published, you can use this URL (https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) as an alternative.
## Disclaimer
- The experimental results presented in the report only show the performance under a specific data set and hyperparameter combination, and cannot represent the essence of each model. The experimental results may change due to random number seeds and computing equipment.
- **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.**
## Acknowledgment
- SSCI-BERT was trained based on [BERT-Base-Cased](https://github.com/google-research/bert).
- SSCI-SciBERT was trained based on [scibert-scivocab-cased](https://github.com/allenai/scibert).
|
tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.3_seed2_epoch1 | a46a68439c78a8514ddabdf6a7ec75fbc6288ee7 | 2022-06-01T11:29:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.3_seed2_epoch1 | 1 | null | transformers | 32,551 | Entry not found |
tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.3_seed1_epoch1 | 882a1f30925727618c010e62f2fff712f1fe828f | 2022-06-01T11:33:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.3_seed1_epoch1 | 1 | null | transformers | 32,552 | Entry not found |
tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_7_1024_0.3_seed1_epoch1 | a5ff8d209fbad5c5205d671f3b575886c0baa945 | 2022-06-01T11:39:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_7_1024_0.3_seed1_epoch1 | 1 | null | transformers | 32,553 | Entry not found |
tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_7_1024_0.3_seed2_epoch1 | b398f785e6a4213a0c3ac69a3e158e1beb7a15aa | 2022-06-01T11:43:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_7_1024_0.3_seed2_epoch1 | 1 | null | transformers | 32,554 | Entry not found |
jxm/u-PMLM-R | a973b1c9b0909c18d88e0c2f66c75a2d1546272b | 2022-06-01T16:12:46.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | jxm | null | jxm/u-PMLM-R | 1 | null | transformers | 32,555 | Entry not found |
VanessaSchenkel/unicamp-finetuned-en-to-pt-dataset-ted | 0e7242ac5f9b5500d5e7d685537ea542bb2f5365 | 2022-06-01T22:38:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:ted_iwlst2013",
"transformers",
"translation",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | translation | false | VanessaSchenkel | null | VanessaSchenkel/unicamp-finetuned-en-to-pt-dataset-ted | 1 | null | transformers | 32,556 | ---
tags:
- translation
- generated_from_trainer
datasets:
- ted_iwlst2013
metrics:
- bleu
model-index:
- name: unicamp-finetuned-en-to-pt-dataset-ted
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ted_iwlst2013
type: ted_iwlst2013
args: en-pt
metrics:
- name: Bleu
type: bleu
value: 25.65030250145235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unicamp-finetuned-en-to-pt-dataset-ted
This model is a fine-tuned version of [unicamp-dl/translation-pt-en-t5](https://huggingface.co/unicamp-dl/translation-pt-en-t5) on the ted_iwlst2013 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8861
- Bleu: 25.6503
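A minimal inference sketch; the base checkpoint is T5-style, so a task prefix may be expected — the prefix used below is an assumption, not documented in this card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "VanessaSchenkel/unicamp-finetuned-en-to-pt-dataset-ted"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A T5-style prefix is assumed here; adjust if the checkpoint expects plain input
text = "translate English to Portuguese: The talk starts in ten minutes."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```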
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sagnikrayc/prajjwal-bert-small-mnli | aaae6430ff7a2b7f1d98af5bb10c447a1677fda7 | 2022-06-01T18:23:28.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sagnikrayc | null | sagnikrayc/prajjwal-bert-small-mnli | 1 | null | transformers | 32,557 | Entry not found |
SoulCaliber/DialoGPT-small-Saber111 | 6a7abb01e925ecb7ad9dd63c65f014dbacecb3e0 | 2022-06-01T18:26:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SoulCaliber | null | SoulCaliber/DialoGPT-small-Saber111 | 1 | null | transformers | 32,558 | ---
tags:
- conversational
---
# My Awesome Model
|
lmqg/t5-base-subjqa-books | 691fe2e3f8ee03b10e8543416bd3fb8127b32ab9 | 2022-06-02T13:12:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-books | 1 | null | transformers | 32,559 | Entry not found |
lmqg/t5-base-subjqa-electronics | a633448f62b7b9a5b5ff72fbe6293a283638b806 | 2022-06-02T15:16:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-electronics | 1 | null | transformers | 32,560 | Entry not found |
income/jpq-question_encoder-base-msmarco-roberta-star | cfc6591f05688ba65cd8601ac303ce73b30e886a | 2022-06-01T22:36:58.000Z | [
"pytorch",
"roberta",
"transformers",
"license:apache-2.0"
] | null | false | income | null | income/jpq-question_encoder-base-msmarco-roberta-star | 1 | null | transformers | 32,561 | ---
license: apache-2.0
---
|
income/jpq-document_encoder-base-msmarco-roberta-star | 5537b8dcd62f49d2eff98c478b0d4e974c8bad6a | 2022-06-01T22:40:09.000Z | [
"pytorch",
"roberta",
"transformers",
"license:apache-2.0"
] | null | false | income | null | income/jpq-document_encoder-base-msmarco-roberta-star | 1 | null | transformers | 32,562 | ---
license: apache-2.0
---
|
lmqg/t5-small-subjqa-movies | 6f2db90e569d4e94f07f903f5e79c0f2d94cca56 | 2022-06-02T18:51:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-movies | 1 | null | transformers | 32,563 | Entry not found |
dkasti/xlm-roberta-base-finetuned-panx-de-fr | d4fda6b3c93fb034881ccf5873a8092467ff19ae | 2022-06-02T01:56:17.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | dkasti | null | dkasti/xlm-roberta-base-finetuned-panx-de-fr | 1 | null | transformers | 32,564 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- F1: 0.8555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2883 | 1.0 | 715 | 0.1818 | 0.8286 |
| 0.1461 | 2.0 | 1430 | 0.1539 | 0.8511 |
| 0.095 | 3.0 | 2145 | 0.1649 | 0.8555 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-fr | ecea0cd8b5ecc7b2c3fcc3aaf662f1d83f851f55 | 2022-06-02T02:03:12.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | dkasti | null | dkasti/xlm-roberta-base-finetuned-panx-fr | 1 | null | transformers | 32,565 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.839946200403497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2789
- F1: 0.8399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.587 | 1.0 | 191 | 0.3355 | 0.7929 |
| 0.274 | 2.0 | 382 | 0.2977 | 0.8283 |
| 0.1836 | 3.0 | 573 | 0.2789 | 0.8399 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-it | dfc6757ccb03dc43f8891a7d861876834b84198c | 2022-06-02T02:05:41.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | dkasti | null | dkasti/xlm-roberta-base-finetuned-panx-it | 1 | null | transformers | 32,566 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8233360723089564
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- F1: 0.8233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8099 | 1.0 | 70 | 0.3035 | 0.7333 |
| 0.2766 | 2.0 | 140 | 0.2661 | 0.7948 |
| 0.1792 | 3.0 | 210 | 0.2388 | 0.8233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-en | 3972042d00b1120703a46edfcef27759421bb05a | 2022-06-02T02:07:48.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | dkasti | null | dkasti/xlm-roberta-base-finetuned-panx-en | 1 | null | transformers | 32,567 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6885793871866295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3996
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1301 | 1.0 | 50 | 0.5666 | 0.4857 |
| 0.5143 | 2.0 | 100 | 0.4469 | 0.6449 |
| 0.3723 | 3.0 | 150 | 0.3996 | 0.6886 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-all | 71d4282424f3205e5eedb943ad800a71d5165936 | 2022-06-02T02:24:54.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | dkasti | null | dkasti/xlm-roberta-base-finetuned-panx-all | 1 | null | transformers | 32,568 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1769
- F1: 0.8533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3049 | 1.0 | 835 | 0.1873 | 0.8139 |
| 0.1576 | 2.0 | 1670 | 0.1722 | 0.8403 |
| 0.1011 | 3.0 | 2505 | 0.1769 | 0.8533 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
PSW/samsum_reverse_train_distilbart_xsum_12-3_epoch3 | 19f4c32b635a818c9d327db34e2c1cd3fdc3e328 | 2022-06-02T04:42:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_12-3_epoch3 | 1 | null | transformers | 32,569 | Entry not found |
callmefons/t5-small | 9bea3749b7dfcd3a8e8b92a2d73a5055faa58cc9 | 2022-06-02T05:25:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | callmefons | null | callmefons/t5-small | 1 | null | transformers | 32,570 | ---
tags:
- generated_from_trainer
model-index:
- name: t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 3.1840 | 3.5714 | 1.7857 | 3.5714 | 3.5714 | 19.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
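## How to use
The ROUGE columns above suggest a summarization-style objective, although the card does not state the task explicitly; under that assumption, a minimal inference sketch looks like this (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="callmefons/t5-small")
text = "Replace this with the kind of document the model was fine-tuned on."
print(summarizer(text, max_length=20, min_length=5))  # generation length ~19 tokens in the card
```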
|
callmefons/mt5-small | f9d0131ab764891c3c6bc3bcefb580a740b82651 | 2022-06-02T05:28:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | callmefons | null | callmefons/mt5-small | 1 | null | transformers | 32,571 | ---
tags:
- generated_from_trainer
model-index:
- name: mt5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 3.0287 | 2.7473 | 1.9481 | 2.7473 | 2.7473 | 19.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PSW/samsum_reverse_train_distilbart_xsum_12-3_minlen10_epoch3 | 018ab62fb6adeb41aaa90b2a7ce2407c13c712ec | 2022-06-02T06:11:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_12-3_minlen10_epoch3 | 1 | null | transformers | 32,572 | Entry not found |
erickfm/t5-large-finetuned-bias-m | c4fc3cfcce6cb91fa9b252a32de4d5818e2140a8 | 2022-06-02T06:07:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-large-finetuned-bias-m | 1 | null | transformers | 32,573 | ---
license: apache-2.0
---
|
202015004/UA_low_training_shreya | 7f1e203c75db72acb80d15d5e84b20ebf0707709 | 2022-06-02T12:45:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/UA_low_training_shreya | 1 | null | transformers | 32,574 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_sampling_min10max2000_epoch3 | 0ef7b13f941fe6f83208337448e110300f0db219 | 2022-06-02T07:46:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_sampling_min10max2000_epoch3 | 1 | null | transformers | 32,575 | Entry not found |
Splend1dchan/xtreme_s_xlsr_byt5-small_minds14.en-all | 7ca5a0c2bb04433549b8033ed7252baeee8a1212 | 2022-06-02T21:59:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/xtreme_s_xlsr_byt5-small_minds14.en-all | 1 | null | transformers | 32,576 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_epoch3 | 5dd6ea33b3f97300dfb08c5ba8514c8b2b552abe | 2022-06-02T09:13:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_epoch3 | 1 | null | transformers | 32,577 | Entry not found |
creynier/wav2vec2-base-swbd-turn-eos-long_short1-8s_utt_removed_4percent2 | e2def4c876c75a75458f1dca319ae73746152d4f | 2022-06-02T10:06:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short1-8s_utt_removed_4percent2 | 1 | null | transformers | 32,578 | Entry not found |
Lolaibrin/distilbert-base-uncased-finetuned-squad | 6eb4ddf658a1188f4c6fbb47d66d69f1631a2c24 | 2022-06-02T13:43:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Lolaibrin | null | Lolaibrin/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 32,579 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4952 | 1.0 | 5533 | 1.3895 |
| 1.3024 | 2.0 | 11066 | 1.2490 |
| 1.2087 | 3.0 | 16599 | 1.2108 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
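## How to use
A minimal question-answering sketch; the question and context below are invented examples, not taken from SQuAD:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Lolaibrin/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which dataset was the checkpoint fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset for extractive QA.",
)
print(result["answer"], result["score"])
```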
|
PSW/samsum_reverse_train_distilbart_xsum_9-6_sampling_min40max2000_epoch3 | 8ec1d3125a5e58295607429c967e9b33b1bf0656 | 2022-06-02T11:27:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_sampling_min40max2000_epoch3 | 1 | null | transformers | 32,580 | Entry not found |
AAkhilesh/wav2vec2-large-xls-r-300m-hsb-colab | b073acc518bab7499294b09a8d0cfac58c04dd35 | 2022-06-02T13:57:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AAkhilesh | null | AAkhilesh/wav2vec2-large-xls-r-300m-hsb-colab | 1 | null | transformers | 32,581 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hsb-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
brindap/wav2vec2-large-xls-r-300m-hsb-colab | e453861ac8979745ba1f40edcc1cab81cf3702d5 | 2022-06-03T06:56:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | brindap | null | brindap/wav2vec2-large-xls-r-300m-hsb-colab | 1 | null | transformers | 32,582 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hsb-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2283
- Wer: 0.9818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 17.2414 | 5.56 | 50 | 7.6790 | 1.0 |
| 5.5913 | 11.11 | 100 | 4.1167 | 1.0 |
| 3.8478 | 16.67 | 150 | 3.3965 | 1.0 |
| 3.3442 | 22.22 | 200 | 3.2828 | 1.0 |
| 3.2219 | 27.78 | 250 | 3.2283 | 0.9818 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
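## How to use
An inference sketch, assuming a 16 kHz mono recording at a placeholder path; note that the WER reported above (about 0.98) means transcriptions will be largely unreliable:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="brindap/wav2vec2-large-xls-r-300m-hsb-colab",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder file
```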
|
Danastos/squad_bert_el_4 | 90cd86ebfa62aae6b2b3cef739255b97d452878a | 2022-06-19T12:57:10.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Danastos | null | Danastos/squad_bert_el_4 | 1 | null | transformers | 32,583 | Entry not found |
ducnapa/apes | cb370421c2b070ee356a2703366ceb02385c61db | 2022-06-02T15:17:57.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | ducnapa | null | ducnapa/apes | 1 | null | transformers | 32,584 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: apes
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8999999761581421
---
# apes
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### chimpanzee

#### gibbon

#### gorilla

#### orangutan
 |
vftnr/ar_en | d8f5f5f0462c03d7c66a676e417c6e7b14a162db | 2022-06-02T15:44:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vftnr | null | vftnr/ar_en | 1 | null | transformers | 32,585 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_epoch6 | aa5018e08420ad746d05ad26114edb26523d4c85 | 2022-06-02T16:08:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_epoch6 | 1 | null | transformers | 32,586 | Entry not found |
Bistolero/nl_one_ep | fe143ebc3acf913eb73ec921ffd50ce70a48d203 | 2022-06-02T16:52:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/nl_one_ep | 1 | null | transformers | 32,587 | Entry not found |
huggingtweets/davemomi | daf969d4a269b6a6b9b8c6cf7b7050612d277fee | 2022-06-02T18:30:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/davemomi | 1 | null | transformers | 32,588 | ---
language: en
thumbnail: http://www.huggingtweets.com/davemomi/1654194627703/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1171375301768744961/QZbLbdu8_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Davide Momi</div>
<div style="text-align: center; font-size: 14px;">@davemomi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Davide Momi.
| Data | Davide Momi |
| --- | --- |
| Tweets downloaded | 273 |
| Retweets | 56 |
| Short tweets | 31 |
| Tweets kept | 186 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4crkiv7x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davemomi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2oh3qlzu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2oh3qlzu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/davemomi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Mudassar/wav2vec2-base-timit-demo-colab53 | 47d1c840a7c5ba305c43d16e5d2415c1f773e739 | 2022-06-02T23:03:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Mudassar | null | Mudassar/wav2vec2-base-timit-demo-colab53 | 1 | null | transformers | 32,589 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab53
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sanamoin/wav2vec2-base-timit-demo-google-colab | f7dff0d850ed580407d6253c6e0e578c30fe88d1 | 2022-06-07T09:13:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sanamoin | null | sanamoin/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 32,590 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
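## How to use
A lower-level decoding sketch, assuming the repository ships a matching `Wav2Vec2Processor`; the waveform here is a silent placeholder rather than real TIMIT audio:
```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "sanamoin/wav2vec2-base-timit-demo-google-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16000, dtype=np.float32)  # one second of silence, stand-in for real 16 kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))  # greedy CTC decode of the most likely tokens
```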
|
victorlee071200/distilbert-base-cased-finetuned-squad | e2047f77f61817e74e01f97d6d7dfdd9b9f50543 | 2022-06-09T04:54:55.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | victorlee071200 | null | victorlee071200/distilbert-base-cased-finetuned-squad | 1 | null | transformers | 32,591 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2357 | 1.0 | 5546 | 1.1985 |
| 0.9525 | 2.0 | 11092 | 1.1285 |
| 0.744 | 3.0 | 16638 | 1.1755 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
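The card reports only the validation loss; exact-match and F1 can be computed with the standard SQuAD metric, sketched below with toy predictions (the ids and answers are invented):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "q1", "prediction_text": "Denver Broncos"}]
references = [{"id": "q1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))  # {'exact_match': ..., 'f1': ...}
```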
|
jmilic/model_name | 13ce63bfcc2895d9b9ac8ba4f7673bfe330d0441 | 2022-06-02T23:19:41.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | jmilic | null | jmilic/model_name | 1 | null | transformers | 32,592 | Entry not found |
huggingtweets/chewschaper | 5370fea7c36ea6571f46017e72e0d1bcbcf59d0b | 2022-06-02T23:07:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/chewschaper | 1 | null | transformers | 32,593 | ---
language: en
thumbnail: http://www.huggingtweets.com/chewschaper/1654211222982/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1443195119218343937/dNb48XD2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Benjamin Schaper</div>
<div style="text-align: center; font-size: 14px;">@chewschaper</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Benjamin Schaper.
| Data | Benjamin Schaper |
| --- | --- |
| Tweets downloaded | 449 |
| Retweets | 106 |
| Short tweets | 17 |
| Tweets kept | 326 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2kzh1jag/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chewschaper's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/113fsajt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/113fsajt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chewschaper')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Splend1dchan/xtreme_s_xlsr_t5lephone-small_minds14.en-all | 9035164de2cef006e8b6ae985562121c05ed4845 | 2022-06-03T12:19:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"all",
"transformers",
"minds14",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | Splend1dchan | null | Splend1dchan/xtreme_s_xlsr_t5lephone-small_minds14.en-all | 1 | null | transformers | 32,594 | ---
language:
- all
license: apache-2.0
tags:
- minds14
- google/xtreme_s
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_t5lephone-small_minds14.en-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_t5lephone-small_minds14.en-all
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.ALL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5979
- F1: 0.8918
- Accuracy: 0.8921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:------:|:--------:|
| 2.3561 | 2.98 | 200 | 2.5464 | 0.0681 | 0.1334 |
| 1.1851 | 5.97 | 400 | 1.5056 | 0.5583 | 0.5861 |
| 1.2805 | 8.95 | 600 | 1.1397 | 0.7106 | 0.7044 |
| 1.0801 | 11.94 | 800 | 0.9863 | 0.7132 | 0.7198 |
| 0.9285 | 14.92 | 1000 | 0.9912 | 0.7037 | 0.7139 |
| 0.4164 | 17.91 | 1200 | 0.8226 | 0.7743 | 0.7741 |
| 0.7669 | 20.89 | 1400 | 0.8131 | 0.7783 | 0.7788 |
| 0.4606 | 23.88 | 1600 | 0.8314 | 0.7879 | 0.7792 |
| 0.6975 | 26.86 | 1800 | 0.7667 | 0.7927 | 0.7939 |
| 0.9913 | 29.85 | 2000 | 0.9207 | 0.7734 | 0.7707 |
| 0.2307 | 32.83 | 2200 | 0.7651 | 0.8072 | 0.8086 |
| 0.1412 | 35.82 | 2400 | 0.7132 | 0.8352 | 0.8311 |
| 0.2141 | 38.8 | 2600 | 0.7551 | 0.8276 | 0.8262 |
| 0.2169 | 41.79 | 2800 | 0.7900 | 0.8148 | 0.8160 |
| 0.3942 | 44.77 | 3000 | 0.8621 | 0.8130 | 0.8042 |
| 0.2306 | 47.76 | 3200 | 0.6788 | 0.8264 | 0.8253 |
| 0.0975 | 50.74 | 3400 | 0.7236 | 0.8295 | 0.8289 |
| 0.0062 | 53.73 | 3600 | 0.6872 | 0.8286 | 0.8277 |
| 0.1781 | 56.71 | 3800 | 0.6990 | 0.8393 | 0.8390 |
| 0.0309 | 59.7 | 4000 | 0.6348 | 0.8496 | 0.8500 |
| 0.0026 | 62.68 | 4200 | 0.6737 | 0.8585 | 0.8566 |
| 0.0043 | 65.67 | 4400 | 0.7780 | 0.8416 | 0.8387 |
| 0.0032 | 68.65 | 4600 | 0.6899 | 0.8482 | 0.8461 |
| 0.0302 | 71.64 | 4800 | 0.6813 | 0.8515 | 0.8495 |
| 0.0027 | 74.62 | 5000 | 0.7163 | 0.8530 | 0.8529 |
| 0.1165 | 77.61 | 5200 | 0.6249 | 0.8603 | 0.8595 |
| 0.0021 | 80.59 | 5400 | 0.6747 | 0.8588 | 0.8578 |
| 0.2558 | 83.58 | 5600 | 0.7514 | 0.8581 | 0.8581 |
| 0.0162 | 86.57 | 5800 | 0.6782 | 0.8667 | 0.8664 |
| 0.1929 | 89.55 | 6000 | 0.6371 | 0.8615 | 0.8600 |
| 0.0621 | 92.54 | 6200 | 0.8079 | 0.8600 | 0.8607 |
| 0.0017 | 95.52 | 6400 | 0.7072 | 0.8678 | 0.8669 |
| 0.0008 | 98.51 | 6600 | 0.7323 | 0.8572 | 0.8541 |
| 0.1655 | 101.49 | 6800 | 0.6953 | 0.8521 | 0.8505 |
| 0.01 | 104.48 | 7000 | 0.7149 | 0.8665 | 0.8674 |
| 0.0135 | 107.46 | 7200 | 0.8990 | 0.8523 | 0.8488 |
| 0.0056 | 110.45 | 7400 | 0.7320 | 0.8673 | 0.8664 |
| 0.0023 | 113.43 | 7600 | 0.7108 | 0.8700 | 0.8705 |
| 0.0025 | 116.42 | 7800 | 0.6464 | 0.8818 | 0.8820 |
| 0.0003 | 119.4 | 8000 | 0.6985 | 0.8706 | 0.8713 |
| 0.0048 | 122.39 | 8200 | 0.6620 | 0.8765 | 0.8740 |
| 0.2335 | 125.37 | 8400 | 0.6515 | 0.8832 | 0.8828 |
| 0.0005 | 128.36 | 8600 | 0.6961 | 0.8776 | 0.8762 |
| 0.0003 | 131.34 | 8800 | 0.5990 | 0.8878 | 0.8882 |
| 0.0002 | 134.33 | 9000 | 0.6236 | 0.8887 | 0.8889 |
| 0.002 | 137.31 | 9200 | 0.6671 | 0.8847 | 0.8845 |
| 0.0002 | 140.3 | 9400 | 0.5970 | 0.8931 | 0.8935 |
| 0.0002 | 143.28 | 9600 | 0.6095 | 0.8906 | 0.8913 |
| 0.0002 | 146.27 | 9800 | 0.6056 | 0.8910 | 0.8913 |
| 0.0002 | 149.25 | 10000 | 0.5979 | 0.8918 | 0.8921 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
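## How to use
MINDS-14 is an intent-detection task, so the audio-classification pipeline should apply, assuming this checkpoint exposes a sequence-classification head; "sample.wav" is a placeholder path:
```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="Splend1dchan/xtreme_s_xlsr_t5lephone-small_minds14.en-all",
)
print(clf("sample.wav", top_k=3))  # top three predicted intents with scores
```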
|
Splend1dchan/wav2vec2-large-lv60_mt5-base_nofreeze_bs64 | 78c18cbd88adb12b7ed4996f910dce4c0bb6e621 | 2022-06-05T02:54:04.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_mt5-base_nofreeze_bs64 | 1 | null | transformers | 32,595 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min40max2000_epoch3 | 3d5c083ab71fbaf9d40536636f348b9a501ffba4 | 2022-06-03T01:40:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min40max2000_epoch3 | 1 | null | transformers | 32,596 | Entry not found |
erickfm/t5-large-finetuned-bias-v3 | df1406fb83239ff3e4132ae0b09079df7914c9dc | 2022-06-03T02:24:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-large-finetuned-bias-v3 | 1 | null | transformers | 32,597 | Entry not found |
erickfm/t5-large-finetuned-bias-v4 | 51a1034d581b289d60f24cb70783373fe45e3f4a | 2022-06-03T03:55:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-large-finetuned-bias-v4 | 1 | null | transformers | 32,598 | Entry not found |
erickfm/t5-large-finetuned-bias-v5 | bf0bba4d47e435675d9669879eae4985431f1948 | 2022-06-03T06:13:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-large-finetuned-bias-v5 | 1 | null | transformers | 32,599 | Entry not found |