modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
202015004/wav2vec2-base-timit-demo-colab | b8137a46f661c0fd3a321c12fafe40adff8bc490 | 2022-02-21T03:49:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 22,800 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6259
- Wer: 0.3544
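Here Wer is the word error rate on the held-out transcripts. As a quick sketch of how such a score is computed with `datasets.load_metric` (the sentences below are invented examples, not card data):
```python
from datasets import load_metric

# Word error rate: word-level edit distance (substitutions + insertions +
# deletions) divided by the number of words in the reference.
wer = load_metric("wer")
score = wer.compute(
    predictions=["the cat sat on mat"],
    references=["the cat sat on the mat"],
)
print(round(score, 4))  # 0.1667 — one deleted word out of six
```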
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
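These are the standard Hugging Face `Trainer` knobs. As a rough sketch (assuming the card was produced by `Trainer`; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the defaults), they map onto `TrainingArguments` like this:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-base-timit-demo-colab",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,     # lr_scheduler_warmup_steps above
    num_train_epochs=30,
    fp16=True,             # "Native AMP" mixed-precision training
)
```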
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6744 | 0.5 | 500 | 2.9473 | 1.0 |
| 1.4535 | 1.01 | 1000 | 0.7774 | 0.6254 |
| 0.7376 | 1.51 | 1500 | 0.6923 | 0.5712 |
| 0.5848 | 2.01 | 2000 | 0.5445 | 0.5023 |
| 0.4492 | 2.51 | 2500 | 0.5148 | 0.4958 |
| 0.4006 | 3.02 | 3000 | 0.5283 | 0.4781 |
| 0.3319 | 3.52 | 3500 | 0.5196 | 0.4628 |
| 0.3424 | 4.02 | 4000 | 0.5285 | 0.4551 |
| 0.2772 | 4.52 | 4500 | 0.5060 | 0.4532 |
| 0.2724 | 5.03 | 5000 | 0.5216 | 0.4422 |
| 0.2375 | 5.53 | 5500 | 0.5376 | 0.4443 |
| 0.2279 | 6.03 | 6000 | 0.6051 | 0.4308 |
| 0.2091 | 6.53 | 6500 | 0.5084 | 0.4423 |
| 0.2029 | 7.04 | 7000 | 0.5083 | 0.4242 |
| 0.1784 | 7.54 | 7500 | 0.6123 | 0.4297 |
| 0.1774 | 8.04 | 8000 | 0.5749 | 0.4339 |
| 0.1542 | 8.54 | 8500 | 0.5110 | 0.4033 |
| 0.1638 | 9.05 | 9000 | 0.6324 | 0.4318 |
| 0.1493 | 9.55 | 9500 | 0.6100 | 0.4152 |
| 0.1591 | 10.05 | 10000 | 0.5508 | 0.4022 |
| 0.1304 | 10.55 | 10500 | 0.5090 | 0.4054 |
| 0.1234 | 11.06 | 11000 | 0.6282 | 0.4093 |
| 0.1218 | 11.56 | 11500 | 0.5817 | 0.3941 |
| 0.121 | 12.06 | 12000 | 0.5741 | 0.3999 |
| 0.1073 | 12.56 | 12500 | 0.5818 | 0.4149 |
| 0.104 | 13.07 | 13000 | 0.6492 | 0.3953 |
| 0.0934 | 13.57 | 13500 | 0.5393 | 0.4083 |
| 0.0961 | 14.07 | 14000 | 0.5510 | 0.3919 |
| 0.0965 | 14.57 | 14500 | 0.5896 | 0.3992 |
| 0.0921 | 15.08 | 15000 | 0.5554 | 0.3947 |
| 0.0751 | 15.58 | 15500 | 0.6312 | 0.3934 |
| 0.0805 | 16.08 | 16000 | 0.6732 | 0.3948 |
| 0.0742 | 16.58 | 16500 | 0.5990 | 0.3884 |
| 0.0708 | 17.09 | 17000 | 0.6186 | 0.3869 |
| 0.0679 | 17.59 | 17500 | 0.5837 | 0.3848 |
| 0.072 | 18.09 | 18000 | 0.5831 | 0.3775 |
| 0.0597 | 18.59 | 18500 | 0.6562 | 0.3843 |
| 0.0612 | 19.1 | 19000 | 0.6298 | 0.3756 |
| 0.0514 | 19.6 | 19500 | 0.6746 | 0.3720 |
| 0.061 | 20.1 | 20000 | 0.6236 | 0.3788 |
| 0.054 | 20.6 | 20500 | 0.6012 | 0.3718 |
| 0.0521 | 21.11 | 21000 | 0.6053 | 0.3778 |
| 0.0494 | 21.61 | 21500 | 0.6154 | 0.3772 |
| 0.0468 | 22.11 | 22000 | 0.6052 | 0.3747 |
| 0.0413 | 22.61 | 22500 | 0.5877 | 0.3716 |
| 0.0424 | 23.12 | 23000 | 0.5786 | 0.3658 |
| 0.0403 | 23.62 | 23500 | 0.5828 | 0.3658 |
| 0.0391 | 24.12 | 24000 | 0.5913 | 0.3685 |
| 0.0312 | 24.62 | 24500 | 0.5850 | 0.3625 |
| 0.0316 | 25.13 | 25000 | 0.6029 | 0.3611 |
| 0.0282 | 25.63 | 25500 | 0.6312 | 0.3624 |
| 0.0328 | 26.13 | 26000 | 0.6312 | 0.3621 |
| 0.0258 | 26.63 | 26500 | 0.5891 | 0.3581 |
| 0.0256 | 27.14 | 27000 | 0.6259 | 0.3546 |
| 0.0255 | 27.64 | 27500 | 0.6315 | 0.3587 |
| 0.0249 | 28.14 | 28000 | 0.6547 | 0.3579 |
| 0.025 | 28.64 | 28500 | 0.6237 | 0.3565 |
| 0.0228 | 29.15 | 29000 | 0.6187 | 0.3559 |
| 0.0209 | 29.65 | 29500 | 0.6259 | 0.3544 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AJ/DialoGPT-small-ricksanchez | 7b8045b6dfdccf9a10bcc70229a18acde13f91ff | 2021-09-27T00:10:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AJ | null | AJ/DialoGPT-small-ricksanchez | 2 | null | transformers | 22,801 | ---
tags:
- conversational
---
# Uses DialoGPT |
AKulk/wav2vec2-base-timit-epochs10 | 082d4d832d62f218c45f62f6ab1cf67cdd0ff7ed | 2022-02-14T12:49:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AKulk | null | AKulk/wav2vec2-base-timit-epochs10 | 2 | null | transformers | 22,802 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-epochs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs10
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs5](https://huggingface.co/AKulk/wav2vec2-base-timit-epochs5) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
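The total train batch size above is derived, not set directly: with gradient accumulation, gradients from several small batches are summed before each optimizer step. A one-line check of the arithmetic (variable names are ours, not from the card):
```python
train_batch_size = 16
gradient_accumulation_steps = 5
# Five consecutive batches of 16 are accumulated per optimizer step,
# giving the effective batch size of 80 reported above.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 80
```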
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AT/distilroberta-base-finetuned-wikitext2 | 6640ec8b5927f44939410bdea44337f8db0d7e55 | 2022-01-19T08:22:36.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | AT | null | AT/distilroberta-base-finetuned-wikitext2 | 2 | null | transformers | 22,803 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
AVSilva/bertimbau-large-fine-tuned-md | 23de596a0b7fb907eb74fcc3e2a5195ff3e83912 | 2022-02-03T17:19:02.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | AVSilva | null | AVSilva/bertimbau-large-fine-tuned-md | 2 | null | transformers | 22,804 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AVSilva/bertimbau-large-fine-tuned-sd | 3659caf43a0ff417c814280813d2a1566c5bd515 | 2021-12-15T20:43:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | AVSilva | null | AVSilva/bertimbau-large-fine-tuned-sd | 2 | null | transformers | 22,805 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Aastha/wav2vec2-base-timit-demo-colab | 43177f07b31c9b65bcd34555b8379b1803395fd5 | 2022-01-22T15:04:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 22,806 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4812
- Wer: 0.3557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4668 | 4.0 | 500 | 1.3753 | 0.9895 |
| 0.6126 | 8.0 | 1000 | 0.4809 | 0.4350 |
| 0.2281 | 12.0 | 1500 | 0.4407 | 0.4033 |
| 0.1355 | 16.0 | 2000 | 0.4590 | 0.3765 |
| 0.0923 | 20.0 | 2500 | 0.4754 | 0.3707 |
| 0.0654 | 24.0 | 3000 | 0.4719 | 0.3557 |
| 0.0489 | 28.0 | 3500 | 0.4812 | 0.3557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Abhishek4/Cuad_Finetune_roberta | b0da9f5eb4c652783b7e49ccc7cd1aaf4537be92 | 2022-02-13T23:18:24.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Abhishek4 | null | Abhishek4/Cuad_Finetune_roberta | 2 | null | transformers | 22,807 | Entry not found |
AccurateIsaiah/DialoGPT-small-sinclair | 0b303f1cd69976dedbbe716d82659e5283b22018 | 2021-11-23T00:58:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AccurateIsaiah | null | AccurateIsaiah/DialoGPT-small-sinclair | 2 | null | transformers | 22,808 | ---
tags:
- conversational
---
# Unfiltered brain upload of Sinclair |
AdharshJolly/HarryPotterBot-Model | 02a71becf1e50023c82a77e08d709502c05c1338 | 2021-11-04T07:48:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AdharshJolly | null | AdharshJolly/HarryPotterBot-Model | 2 | null | transformers | 22,809 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
AethiQs-Max/AethiQs_GemBERT_bertje_50k | 7eac1013243a6e9225338e69bebc8b156b0591db | 2021-06-23T14:59:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AethiQs-Max | null | AethiQs-Max/AethiQs_GemBERT_bertje_50k | 2 | null | transformers | 22,810 | Entry not found |
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_30-epoch_30 | 83c63a5086c0ae8f4cdf8834d66351ce9e053534 | 2021-08-05T14:23:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AethiQs-Max | null | AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_30-epoch_30 | 2 | null | transformers | 22,811 | Entry not found |
AethiQs-Max/s3-v1-20_epochs | 6be8b14979bd2bec7e10fd029baa7731925a0b20 | 2021-08-08T15:36:00.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AethiQs-Max | null | AethiQs-Max/s3-v1-20_epochs | 2 | null | transformers | 22,812 | Entry not found |
AiPorter/DialoGPT-small-Back_to_the_future | c9d68d55cce14c41c64dec7d13e8745e20cdd2a3 | 2022-02-23T00:04:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AiPorter | null | AiPorter/DialoGPT-small-Back_to_the_future | 2 | null | transformers | 22,813 | ---
tags:
- conversational
---
# Back to the Future DialoGPT Model |
AidenGO/KDXF_Bert4MaskedLM | d20459254dd5af2d65a05a4e857b3e9f396db499 | 2021-08-02T11:27:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AidenGO | null | AidenGO/KDXF_Bert4MaskedLM | 2 | null | transformers | 22,814 | Entry not found |
Akashpb13/Hausa_xlsr | ea90b9ea39c3996cf26982ee571414b828f8a2a9 | 2022-03-23T18:35:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Akashpb13 | null | Akashpb13/Hausa_xlsr | 2 | 1 | transformers | 22,815 | ---
language:
- ha
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- ha
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Akashpb13/Hausa_xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ha
metrics:
- name: Test WER
type: wer
value: 0.20614541257934219
- name: Test CER
type: cer
value: 0.04358048053214061
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ha
metrics:
- name: Test WER
type: wer
value: 0.20614541257934219
- name: Test CER
type: cer
value: 0.04358048053214061
---
# Akashpb13/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
It achieves the following results on the evaluation set (10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.275118
- Wer: 0.329955
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Hausa train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only data points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets given in Common Voice 7.0.
## Training procedure
To create the training dataset, all available datasets were concatenated and a 90-10 train-evaluation split was applied, as sketched below.
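A rough sketch of that pipeline with the `datasets` library (the split names, the `up_votes`/`down_votes` columns, and the pandas round-trip for deduplication are assumptions about the Common Voice schema, not the author's exact script):
```python
from datasets import Dataset, concatenate_datasets, load_dataset

# Concatenate the Common Voice splits mentioned above (reported.tsv ships
# with the raw download and may not be exposed as a Hub split).
splits = ["train", "validation", "invalidated", "other"]
parts = [load_dataset("mozilla-foundation/common_voice_8_0", "ha", split=s) for s in splits]
full = concatenate_datasets(parts)

# Keep only clips with more upvotes than downvotes.
full = full.filter(lambda x: x["up_votes"] > x["down_votes"])

# Drop duplicate sentences, then take the 90-10 train/eval split
# (seed 13 matches the training hyperparameters below).
df = full.to_pandas().drop_duplicates(subset="sentence")
ds = Dataset.from_pandas(df).train_test_split(test_size=0.1, seed=13)
```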
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.175900 | 2.750914 | 1.000000 |
| 1000 | 1.028700 | 0.338649 | 0.497999 |
| 1500 | 0.332200 | 0.246896 | 0.402241 |
| 2000 | 0.227300 | 0.239640 | 0.395839 |
| 2500 | 0.175000 | 0.239577 | 0.373966 |
| 3000 | 0.140400 | 0.243272 | 0.356095 |
| 3500 | 0.119200 | 0.263761 | 0.365164 |
| 4000 | 0.099300 | 0.265954 | 0.353428 |
| 4500 | 0.084400 | 0.276367 | 0.349693 |
| 5000 | 0.073700 | 0.282631 | 0.343825 |
| 5500 | 0.068000 | 0.282344 | 0.341158 |
| 6000 | 0.064500 | 0.281591 | 0.342491 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Hausa_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
|
Akashpb13/xlsr_hungarian_new | 8975b4bc38fa44d262dc6165b87f30ba56659809 | 2022-03-23T18:33:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hu",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Akashpb13 | null | Akashpb13/xlsr_hungarian_new | 2 | 1 | transformers | 22,816 | ---
language:
- hu
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hu
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Akashpb13/xlsr_hungarian_new
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hu
metrics:
- name: Test WER
type: wer
value: 0.2851621517163838
- name: Test CER
type: cer
value: 0.06112982522287432
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hu
metrics:
- name: Test WER
type: wer
value: 0.2851621517163838
- name: Test CER
type: cer
value: 0.06112982522287432
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: hu
metrics:
- name: Test WER
type: wer
value: 47.15
---
# Akashpb13/xlsr_hungarian_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - hu dataset.
It achieves the following results on the evaluation set (10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.197464
- Wer: 0.330094
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Hungarian train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only data points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets given in Common Voice 7.0.
## Training procedure
To create the training dataset, all available datasets were concatenated and a 90-10 train-evaluation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000095637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 16
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 4.785300 | 0.952295 | 0.796236 |
| 1000 | 0.535800 | 0.217474 | 0.381613 |
| 1500 | 0.258400 | 0.205524 | 0.345056 |
| 2000 | 0.202800 | 0.198680 | 0.336264 |
| 2500 | 0.182700 | 0.197464 | 0.330094 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/xlsr_hungarian_new --dataset mozilla-foundation/common_voice_8_0 --config hu --split test
```
|
Akashpb13/xlsr_maltese_wav2vec2 | 9aeea3e43cf8508d68de553d366c558a84745523 | 2021-07-05T14:09:58.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Akashpb13 | null | Akashpb13/xlsr_maltese_wav2vec2 | 2 | null | transformers | 22,817 | ---
language: mt
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Maltese by Akash PB
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mt
type: common_voice
args: {lang_id}
metrics:
- name: Test WER
type: wer
value: 29.42
---
# Wav2Vec2-Large-XLSR-53-Maltese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Maltese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16 kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

model_name = "Akashpb13/xlsr_maltese_wav2vec2"
device = "cuda"
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]'

model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)

ds = load_dataset("common_voice", "mt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    # Load the clip and resample from 48 kHz to the 16 kHz the model expects.
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    # Strip punctuation and lowercase the reference transcript.
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch

ds = ds.map(map_to_array)

def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch

result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Test Result**: 29.42 %
|
AlbertHSU/ChineseFoodBert | f07e72878cb15caf04eecdd983e41854ed3a90c4 | 2022-01-12T17:26:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AlbertHSU | null | AlbertHSU/ChineseFoodBert | 2 | 1 | transformers | 22,818 | Entry not found |
Aleksandar/distilbert-srb-base-cased-oscar | f8a9d22cbe771335e5cb55b182745085ced82044 | 2021-09-22T12:19:26.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | false | Aleksandar | null | Aleksandar/distilbert-srb-base-cased-oscar | 2 | null | transformers | 22,819 | ---
tags:
- generated_from_trainer
model_index:
- name: distilbert-srb-base-cased-oscar
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-base-cased-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/distilbert-srb-ner | 5d3c89f63aed4e52c2016b682c1fb329447fe8d0 | 2021-09-09T06:27:16.000Z | [
"pytorch",
"distilbert",
"token-classification",
"sr",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | false | Aleksandar | null | Aleksandar/distilbert-srb-ner | 2 | null | transformers | 22,820 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
language:
- sr
model_index:
- name: distilbert-srb-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: sr
metric:
name: Accuracy
type: accuracy
value: 0.9576561462374611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2972
- Precision: 0.8871
- Recall: 0.9100
- F1: 0.8984
- Accuracy: 0.9577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3818 | 1.0 | 625 | 0.2175 | 0.8175 | 0.8370 | 0.8272 | 0.9306 |
| 0.198 | 2.0 | 1250 | 0.1766 | 0.8551 | 0.8732 | 0.8640 | 0.9458 |
| 0.1423 | 3.0 | 1875 | 0.1702 | 0.8597 | 0.8763 | 0.8679 | 0.9473 |
| 0.079 | 4.0 | 2500 | 0.1774 | 0.8674 | 0.8875 | 0.8773 | 0.9515 |
| 0.0531 | 5.0 | 3125 | 0.2011 | 0.8688 | 0.8965 | 0.8825 | 0.9522 |
| 0.0429 | 6.0 | 3750 | 0.2082 | 0.8769 | 0.8970 | 0.8868 | 0.9538 |
| 0.032 | 7.0 | 4375 | 0.2268 | 0.8764 | 0.8916 | 0.8839 | 0.9528 |
| 0.0204 | 8.0 | 5000 | 0.2423 | 0.8726 | 0.8959 | 0.8841 | 0.9529 |
| 0.0148 | 9.0 | 5625 | 0.2522 | 0.8774 | 0.8991 | 0.8881 | 0.9538 |
| 0.0125 | 10.0 | 6250 | 0.2544 | 0.8823 | 0.9024 | 0.8922 | 0.9559 |
| 0.0108 | 11.0 | 6875 | 0.2592 | 0.8780 | 0.9041 | 0.8909 | 0.9553 |
| 0.007 | 12.0 | 7500 | 0.2672 | 0.8877 | 0.9056 | 0.8965 | 0.9571 |
| 0.0048 | 13.0 | 8125 | 0.2714 | 0.8879 | 0.9089 | 0.8982 | 0.9583 |
| 0.0049 | 14.0 | 8750 | 0.2872 | 0.8873 | 0.9068 | 0.8970 | 0.9573 |
| 0.0034 | 15.0 | 9375 | 0.2915 | 0.8883 | 0.9114 | 0.8997 | 0.9577 |
| 0.0027 | 16.0 | 10000 | 0.2890 | 0.8865 | 0.9103 | 0.8983 | 0.9581 |
| 0.0028 | 17.0 | 10625 | 0.2885 | 0.8877 | 0.9085 | 0.8980 | 0.9576 |
| 0.0014 | 18.0 | 11250 | 0.2928 | 0.8860 | 0.9073 | 0.8965 | 0.9577 |
| 0.0013 | 19.0 | 11875 | 0.2963 | 0.8856 | 0.9099 | 0.8976 | 0.9576 |
| 0.001 | 20.0 | 12500 | 0.2972 | 0.8871 | 0.9100 | 0.8984 | 0.9577 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar/electra-srb-ner-setimes | 45095f192dd8b8b054b7ebb3a63dc73a3b155a17 | 2021-09-22T12:19:32.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | false | Aleksandar | null | Aleksandar/electra-srb-ner-setimes | 2 | null | transformers | 22,821 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: electra-srb-ner-setimes
results:
- task:
name: Token Classification
type: token-classification
metric:
name: Accuracy
type: accuracy
value: 0.9546789604788638
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner-setimes
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2804
- Precision: 0.8286
- Recall: 0.8081
- F1: 0.8182
- Accuracy: 0.9547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2981 | 0.6737 | 0.6113 | 0.6410 | 0.9174 |
| No log | 2.0 | 208 | 0.2355 | 0.7279 | 0.6701 | 0.6978 | 0.9307 |
| No log | 3.0 | 312 | 0.2079 | 0.7707 | 0.7062 | 0.7371 | 0.9402 |
| No log | 4.0 | 416 | 0.2078 | 0.7689 | 0.7479 | 0.7582 | 0.9449 |
| 0.2391 | 5.0 | 520 | 0.2089 | 0.8083 | 0.7476 | 0.7767 | 0.9484 |
| 0.2391 | 6.0 | 624 | 0.2199 | 0.7981 | 0.7726 | 0.7851 | 0.9487 |
| 0.2391 | 7.0 | 728 | 0.2528 | 0.8205 | 0.7749 | 0.7971 | 0.9511 |
| 0.2391 | 8.0 | 832 | 0.2265 | 0.8074 | 0.8003 | 0.8038 | 0.9524 |
| 0.2391 | 9.0 | 936 | 0.2843 | 0.8265 | 0.7716 | 0.7981 | 0.9504 |
| 0.0378 | 10.0 | 1040 | 0.2450 | 0.8024 | 0.8019 | 0.8021 | 0.9520 |
| 0.0378 | 11.0 | 1144 | 0.2550 | 0.8116 | 0.7986 | 0.8051 | 0.9519 |
| 0.0378 | 12.0 | 1248 | 0.2706 | 0.8208 | 0.7957 | 0.8081 | 0.9532 |
| 0.0378 | 13.0 | 1352 | 0.2664 | 0.8040 | 0.8035 | 0.8038 | 0.9530 |
| 0.0378 | 14.0 | 1456 | 0.2571 | 0.8011 | 0.8110 | 0.8060 | 0.9529 |
| 0.0099 | 15.0 | 1560 | 0.2673 | 0.8051 | 0.8129 | 0.8090 | 0.9534 |
| 0.0099 | 16.0 | 1664 | 0.2733 | 0.8074 | 0.8087 | 0.8081 | 0.9529 |
| 0.0099 | 17.0 | 1768 | 0.2835 | 0.8254 | 0.8074 | 0.8163 | 0.9543 |
| 0.0099 | 18.0 | 1872 | 0.2771 | 0.8222 | 0.8081 | 0.8151 | 0.9545 |
| 0.0099 | 19.0 | 1976 | 0.2776 | 0.8237 | 0.8084 | 0.8160 | 0.9546 |
| 0.0044 | 20.0 | 2080 | 0.2804 | 0.8286 | 0.8081 | 0.8182 | 0.9547 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar1932/gpt2-rock-124439808 | 13f190f5bea40c18cee91b1d5f512f362d74b0c1 | 2022-01-19T17:23:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt2-rock-124439808 | 2 | null | transformers | 22,822 | Entry not found |
Aleksandar1932/gpt2-soul | 0e67ea4c10ffaa9cb84223f15c81652e41c6499a | 2022-03-18T23:53:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt2-soul | 2 | null | transformers | 22,823 | Entry not found |
Aleksandar1932/gpt2-spanish-classics | aef9a3f1f3bd6cfd15121d49432abc0dceda4187 | 2022-03-19T00:07:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt2-spanish-classics | 2 | null | transformers | 22,824 | Entry not found |
AllwynJ/HarryBoy | 51794f2dbc5aac3b32d62d0c067eec4527967a96 | 2021-09-07T17:42:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AllwynJ | null | AllwynJ/HarryBoy | 2 | null | transformers | 22,825 | ---
tags:
- conversational
---
#HarryBoy |
Amalq/roberta-base-finetuned-schizophreniaReddit2 | 4833404b92f95e3543b3b288ce290058a46c7f0c | 2021-12-20T05:41:28.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Amalq | null | Amalq/roberta-base-finetuned-schizophreniaReddit2 | 2 | null | transformers | 22,826 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-schizophreniaReddit2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-schizophreniaReddit2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 490 | 1.8093 |
| 1.9343 | 2.0 | 980 | 1.7996 |
| 1.8856 | 3.0 | 1470 | 1.7966 |
| 1.8552 | 4.0 | 1960 | 1.7844 |
| 1.8267 | 5.0 | 2450 | 1.7839 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Amirosein/roberta | d2f41324b98cb1a9dd59f43f305c5e5c0ed0403e | 2021-09-06T13:41:15.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Amirosein | null | Amirosein/roberta | 2 | null | transformers | 22,827 | Entry not found |
AndrewMcDowell/wav2vec2-xls-r-1b-arabic | 1adf2a34e972a931e7f8ba8cf77c345a8f9d8626 | 2022-02-01T08:13:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AndrewMcDowell | null | AndrewMcDowell/wav2vec2-xls-r-1b-arabic | 2 | null | transformers | 22,828 | ---
language:
- ar
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1373
- Wer: 0.8607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.2416 | 0.84 | 500 | 1.2867 | 0.8875 |
| 2.3089 | 1.67 | 1000 | 1.8336 | 0.9548 |
| 2.3614 | 2.51 | 1500 | 1.5937 | 0.9469 |
| 2.5234 | 3.35 | 2000 | 1.9765 | 0.9867 |
| 2.5373 | 4.19 | 2500 | 1.9062 | 0.9916 |
| 2.5703 | 5.03 | 3000 | 1.9772 | 0.9915 |
| 2.4656 | 5.86 | 3500 | 1.8083 | 0.9829 |
| 2.4339 | 6.7 | 4000 | 1.7548 | 0.9752 |
| 2.344 | 7.54 | 4500 | 1.6146 | 0.9638 |
| 2.2677 | 8.38 | 5000 | 1.5105 | 0.9499 |
| 2.2074 | 9.21 | 5500 | 1.4191 | 0.9357 |
| 2.3768 | 10.05 | 6000 | 1.6663 | 0.9665 |
| 2.3804 | 10.89 | 6500 | 1.6571 | 0.9720 |
| 2.3237 | 11.72 | 7000 | 1.6049 | 0.9637 |
| 2.317 | 12.56 | 7500 | 1.5875 | 0.9655 |
| 2.2988 | 13.4 | 8000 | 1.5357 | 0.9603 |
| 2.2906 | 14.24 | 8500 | 1.5637 | 0.9592 |
| 2.2848 | 15.08 | 9000 | 1.5326 | 0.9537 |
| 2.2381 | 15.91 | 9500 | 1.5631 | 0.9508 |
| 2.2072 | 16.75 | 10000 | 1.4565 | 0.9395 |
| 2.197 | 17.59 | 10500 | 1.4304 | 0.9406 |
| 2.198 | 18.43 | 11000 | 1.4230 | 0.9382 |
| 2.1668 | 19.26 | 11500 | 1.3998 | 0.9315 |
| 2.1498 | 20.1 | 12000 | 1.3920 | 0.9258 |
| 2.1244 | 20.94 | 12500 | 1.3584 | 0.9153 |
| 2.0953 | 21.78 | 13000 | 1.3274 | 0.9054 |
| 2.0762 | 22.61 | 13500 | 1.2933 | 0.9073 |
| 2.0587 | 23.45 | 14000 | 1.2516 | 0.8944 |
| 2.0363 | 24.29 | 14500 | 1.2214 | 0.8902 |
| 2.0302 | 25.13 | 15000 | 1.2087 | 0.8871 |
| 2.0071 | 25.96 | 15500 | 1.1953 | 0.8786 |
| 1.9882 | 26.8 | 16000 | 1.1738 | 0.8712 |
| 1.9772 | 27.64 | 16500 | 1.1647 | 0.8672 |
| 1.9585 | 28.48 | 17000 | 1.1459 | 0.8635 |
| 1.944 | 29.31 | 17500 | 1.1414 | 0.8616 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Andrija/SRoBERTa-base | cd6df96eece039e11e4dd726ead8c0b204af1d4d | 2021-08-09T19:41:34.000Z | [
"pytorch",
"roberta",
"fill-mask",
"hr",
"sr",
"dataset:oscar",
"dataset:leipzig",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Andrija | null | Andrija/SRoBERTa-base | 2 | null | transformers | 22,829 | ---
datasets:
- oscar
- leipzig
language:
- hr
- sr
tags:
- masked-lm
widget:
- text: "Ovo je početak <mask>."
license: apache-2.0
---
# Transformer language model for Croatian and Serbian
Trained for two epochs on 3 GB of Croatian and Serbian text drawn from the Leipzig and OSCAR corpora.
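A minimal usage sketch with the `transformers` fill-mask pipeline, reusing the widget example from the card metadata:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Andrija/SRoBERTa-base")
# Top completions for the masked Croatian/Serbian sentence.
print(fill_mask("Ovo je početak <mask>."))
```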
# Dataset information
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa-base` | 80M | Second | Leipzig Corpus and OSCAR (3 GB of text) |
|
AnonymousSub/AR_EManuals-BERT | ddf509723349598a7d6f23e9a69bd846a09c9bed | 2022-01-12T11:32:11.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_EManuals-BERT | 2 | null | transformers | 22,830 | Entry not found |
AnonymousSub/AR_EManuals-RoBERTa | 0993104955b4a40dc0d9be443af1338d74cd48fc | 2022-01-12T11:38:46.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_EManuals-RoBERTa | 2 | null | transformers | 22,831 | Entry not found |
AnonymousSub/AR_bert-base-uncased | 0d8cf9911bc0d76b7c396d5c86960122c3e406bd | 2022-01-12T11:46:29.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_bert-base-uncased | 2 | null | transformers | 22,832 | Entry not found |
AnonymousSub/AR_rule_based_bert_triplet_epochs_1_shard_1 | 9dc04a82bd49757cff625f995f559270b934e656 | 2022-01-10T23:12:46.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_bert_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,833 | Entry not found |
AnonymousSub/AR_rule_based_hier_triplet_epochs_1_shard_1 | be8acb687c0ed327d53967e21e4cb3d06e2f575f | 2022-01-11T02:24:42.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_hier_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,834 | Entry not found |
AnonymousSub/AR_rule_based_only_classfn_twostage_epochs_1_shard_1 | b2e118d50c6477837ff04cf24bc7a56132963a25 | 2022-01-11T01:50:15.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_only_classfn_twostage_epochs_1_shard_1 | 2 | null | transformers | 22,835 | Entry not found |
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | ef5cb6c1fe3716300f906593a30c2433a750e51c | 2022-01-06T12:05:53.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | 2 | null | transformers | 22,836 | Entry not found |
AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_1 | c82650e0d18dd695908f02524930060526b3395c | 2022-01-06T20:10:47.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,837 | Entry not found |
AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_1 | abc1756db6bf5b934733f2858433a28e6306d97c | 2022-01-06T15:44:58.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_1 | 2 | null | transformers | 22,838 | Entry not found |
AnonymousSub/AR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | 57e13298de5b6499ca63b6480926fb65a61053e4 | 2022-01-06T10:54:54.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | 2 | null | transformers | 22,839 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | 48c9308751222922c85d6a823ae39165538deddb | 2022-01-06T07:52:06.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | 2 | null | transformers | 22,840 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | 51f7a4624bd79446768cc65a4e066aaaeb0c7c9c | 2022-01-06T14:48:58.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | 2 | null | transformers | 22,841 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | 991d98bab02f9655f220b185d7043961f27ebb73 | 2022-01-06T15:08:33.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | 2 | null | transformers | 22,842 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | 9791313320b7d60554f717f6f686b512e85188da | 2022-01-06T08:11:28.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | 2 | null | transformers | 22,843 | Entry not found |
AnonymousSub/AR_rule_based_twostagetriplet_hier_epochs_1_shard_1 | 3f76c0570011cece9f9ba47f4473dcdac56751c4 | 2022-01-10T23:30:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_twostagetriplet_hier_epochs_1_shard_1 | 2 | null | transformers | 22,844 | Entry not found |
AnonymousSub/AR_specter | dee58272b4fd341de72883cae21af4fab60a8189 | 2022-01-12T12:16:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_specter | 2 | null | transformers | 22,845 | Entry not found |
AnonymousSub/EManuals_RoBERTa_squad2.0 | bc2d3c0ed0b0b73ff51817a95719741b323f6299 | 2022-01-17T19:10:32.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/EManuals_RoBERTa_squad2.0 | 2 | null | transformers | 22,846 | Entry not found |
AnonymousSub/SDR_HF_model_base | 37aa7a998e37ad0905e7e77a55f7fe95f3d80e47 | 2022-01-11T18:51:20.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SDR_HF_model_base | 2 | null | transformers | 22,847 | Entry not found |
AnonymousSub/SR_EManuals-BERT | 935bb2df79ac75b4099d57dea855c9d0aecd5e3a | 2022-01-12T11:28:14.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_EManuals-BERT | 2 | null | transformers | 22,848 | Entry not found |
AnonymousSub/SR_EManuals-RoBERTa | 549d2759791d0ec12428ed62ae7559808dc6fb0b | 2022-01-12T11:22:29.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_EManuals-RoBERTa | 2 | null | transformers | 22,849 | Entry not found |
AnonymousSub/SR_cline | ac3c263820e170d800066c06905e3a1a7f96cdcc | 2022-01-12T10:58:10.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_cline | 2 | null | transformers | 22,850 | Entry not found |
AnonymousSub/SR_rule_based_bert_triplet_epochs_1_shard_1 | acc52150c347369e6e84f57a8a2ff2b631298ca3 | 2022-01-11T00:20:20.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_bert_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,851 | Entry not found |
AnonymousSub/SR_rule_based_hier_quadruplet_epochs_1_shard_1 | d5e2a8fbba2ba67e97b40c71df189d4dc83841a2 | 2022-01-10T23:25:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_hier_quadruplet_epochs_1_shard_1 | 2 | null | transformers | 22,852 | Entry not found |
AnonymousSub/SR_rule_based_only_classfn_epochs_1_shard_1 | 130e8ad57a63d490e2428a4eacbdceab5f23ea22 | 2022-01-11T01:50:18.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_only_classfn_epochs_1_shard_1 | 2 | null | transformers | 22,853 | Entry not found |
AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | d447c26a39f0d2a4e5b58d9f55b02ca1be5accac | 2022-01-12T09:30:25.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | 2 | null | transformers | 22,854 | Entry not found |
AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | ba3b105bd0ea8a49dd2b633aa3b34bfdc1b4c03f | 2022-01-06T06:51:36.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | 2 | null | transformers | 22,855 | Entry not found |
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | 801b82969ab7d8344fb78850c25272582c0bd666 | 2022-01-06T08:03:18.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,856 | Entry not found |
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | 5e05246d3157a459ec6a45064291398f570fa661 | 2022-01-06T07:27:11.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | 2 | null | transformers | 22,857 | Entry not found |
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy | 3b5d2e94ae3f0bb5595678a9c6741a8cd1018088 | 2022-01-23T17:10:55.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy | 2 | null | transformers | 22,858 | Entry not found |
AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_10 | f4917ed6ad1d5418d5c31a1ce86ebb5222471927 | 2022-01-06T09:14:41.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_10 | 2 | null | transformers | 22,859 | Entry not found |
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | 99c148491404c6f84155690d36e4ac59e5eafa3c | 2022-01-06T07:07:00.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | 2 | null | transformers | 22,860 | Entry not found |
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | a0c2611d9de2418d1af804fcb9c269912811fc06 | 2022-01-06T08:02:52.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | 2 | null | transformers | 22,861 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | d5f33dc871134a1b635f674b4aa180195be65fe2 | 2022-01-06T04:32:33.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | 2 | null | transformers | 22,862 | Entry not found |
AnonymousSub/SR_rule_based_twostage_quadruplet_epochs_1_shard_1 | 5723c772af6525230004454c6925f16c5e5b1b99 | 2022-01-10T21:37:49.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_twostage_quadruplet_epochs_1_shard_1 | 2 | null | transformers | 22,863 | Entry not found |
AnonymousSub/SR_rule_based_twostagetriplet_epochs_1_shard_1 | dc654f5766adb88baa7b0aa9d996ab459d0f08e5 | 2022-01-11T00:39:00.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_twostagetriplet_epochs_1_shard_1 | 2 | null | transformers | 22,864 | Entry not found |
AnonymousSub/bert-base-uncased_squad2.0 | 6cd360b4aa161f6157f2560fab7d84883b945f9c | 2022-01-17T15:39:30.000Z | ["pytorch", "bert", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/bert-base-uncased_squad2.0 | 2 | null | transformers | 22,865 | Entry not found |
AnonymousSub/bert_triplet_epochs_1_shard_10 | e8176c8e1cf04eb168a0aa9503d8a9d9c5b35b99 | 2022-01-04T08:12:39.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/bert_triplet_epochs_1_shard_10 | 2 | null | transformers | 22,866 | Entry not found |
AnonymousSub/consert-techqa | f551c91ce7624437f5edeefb1ee747d1c1ef44dc | 2021-09-30T19:23:00.000Z | ["pytorch", "bert", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/consert-techqa | 2 | null | transformers | 22,867 | Entry not found |
AnonymousSub/declutr-biomed-roberta-papers | cfb4134d89497360b22a501d28a4c5bc87bf76cb | 2021-10-25T03:42:14.000Z | ["pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible"] | fill-mask | false | AnonymousSub | null | AnonymousSub/declutr-biomed-roberta-papers | 2 | null | transformers | 22,868 | Entry not found |
AnonymousSub/declutr-model-emanuals | 808fbf96ea927b22b175c62f971c689fa54488dd | 2021-09-06T03:59:22.000Z | ["pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible"] | fill-mask | false | AnonymousSub | null | AnonymousSub/declutr-model-emanuals | 2 | null | transformers | 22,869 | Entry not found |
AnonymousSub/declutr-model_squad2.0 | aec762024df44e4abb5f7b87816fb8f0f24d35bd | 2022-01-17T19:40:34.000Z | ["pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/declutr-model_squad2.0 | 2 | null | transformers | 22,870 | Entry not found |
AnonymousSub/declutr-roberta-papers | 70d7e9a01b7411be7c4f7b2be932a621c5931dea | 2021-10-22T13:10:04.000Z | ["pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible"] | fill-mask | false | AnonymousSub | null | AnonymousSub/declutr-roberta-papers | 2 | null | transformers | 22,871 | Entry not found |
AnonymousSub/roberta-base_squad2.0 | 2da16b47c45d5704d91de2180ceeba6e0734114f | 2022-01-17T18:14:36.000Z | ["pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/roberta-base_squad2.0 | 2 | null | transformers | 22,872 | Entry not found |
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_1 | c74fafbc3adae81e57476df9d0b429812054c58a | 2021-12-22T17:00:15.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_1 | 2 | null | transformers | 22,873 | Entry not found |
AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_1 | 46221712dd8fd9488cd71eea1cc5d79494a66e63 | 2021-12-22T16:57:35.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_1 | 2 | null | transformers | 22,874 | Entry not found |
AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_10 | 229b885af5c65b8beafb04265d8b788181d722ad | 2022-01-04T08:15:03.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_10 | 2 | null | transformers | 22,875 | Entry not found |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10 | 63ea19cc1b10fb36cbca410a869a66b4e90ec50c | 2022-01-04T08:17:09.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10 | 2 | null | transformers | 22,876 | Entry not found |
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1 | eecb6fd986f3c7614a00d4ddea5caf554c23ce49 | 2021-12-22T16:58:21.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,877 | Entry not found |
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_squad2.0 | 04b6df518ee8dc579440e51bb7f5044dbd580760 | 2022-01-17T20:30:13.000Z | ["pytorch", "bert", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,878 | Entry not found |
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0 | 57ce714e5ebc4aae5a96724c1717a769ef8482a9 | 2022-01-19T01:03:21.000Z | ["pytorch", "bert", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,879 | Entry not found |
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1 | af66789df045fddb48885c245435b7499d46807a | 2021-12-22T17:01:01.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,880 | Entry not found |
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_squad2.0 | e6468b4dea88ff878d2b983203bf444d500f7ee9 | 2022-01-18T02:01:35.000Z | ["pytorch", "bert", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,881 | Entry not found |
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1 | 1b82288a8497e4dfb4d0e9a5089ca57eb24202ad | 2021-12-22T16:35:39.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_only_classfn_epochs_1_shard_1 | 2 | null | transformers | 22,882 | Entry not found |
AnonymousSub/rule_based_only_classfn_epochs_1_shard_10 | 37f2d68ce1df1820514080ca5c8d123d4905bb6d | 2022-01-04T08:25:50.000Z | ["pytorch", "bert", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_only_classfn_epochs_1_shard_10 | 2 | null | transformers | 22,883 | Entry not found |
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | 25895e8e6bd7ac6216728e74275d78360666b158 | 2022-01-20T18:26:45.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | 2 | null | transformers | 22,884 | Entry not found |
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | e38b56989ab5230317a630be7faf6e74a4dadc0f | 2022-01-04T22:07:58.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | 2 | null | transformers | 22,885 | Entry not found |
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_10 | 371ed25e65dc53e674d1481c50f3e0578da91b39 | 2022-01-04T22:06:02.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_10 | 2 | null | transformers | 22,886 | Entry not found |
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | b10166236e81dc5223fb7f6f88469003d87f2411 | 2022-01-04T22:11:52.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | 2 | null | transformers | 22,887 | Entry not found |
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1_squad2.0 | 3d86b0e0d3b167e25ac0277cb6c7f6572a7cbf02 | 2022-01-20T20:46:45.000Z | ["pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,888 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | 94773ea8ed2e2e5f8ec3c5de4d2e089d47b4e239 | 2022-01-18T23:58:05.000Z | ["pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,889 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1 | 77f504fcb3750c3f22794f4d493637ae361835e6 | 2022-01-04T22:09:16.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1 | 2 | null | transformers | 22,890 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_10 | f10a22c41c45662d101e78e3bd0ab9a08b754f28 | 2022-01-04T22:10:08.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_10 | 2 | null | transformers | 22,891 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_squad2.0 | c07d38b12947f098685a6d00903c7df27a279973 | 2022-01-18T00:19:10.000Z | ["pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,892 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_10 | 92e9fddd0336568641d3ab6bafa08208cf13d3c6 | 2022-01-04T22:04:09.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_10 | 2 | null | transformers | 22,893 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | b86bd72552e9e2b9e59c43b1eedd9c7d3e9e314f | 2022-01-05T10:13:40.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | 2 | null | transformers | 22,894 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | c11909ea74ec7f9d034bb21386a818656953f36c | 2022-01-05T10:15:02.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | 2 | null | transformers | 22,895 | Entry not found |
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | c2b6190351bc7ef548ee8430d2637dff19b99702 | 2022-01-05T10:18:05.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | 2 | null | transformers | 22,896 | Entry not found |
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1_squad2.0 | 8721f3dcad887e106a917ff6759f7f15002f9318 | 2022-01-18T04:22:45.000Z | ["pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible"] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,897 | Entry not found |
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | 90fb00f7ec583883aa0e96a24fe76c15f714fbd3 | 2022-01-05T10:20:47.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | 2 | null | transformers | 22,898 | Entry not found |
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1 | 620f1ee37e80cc057fe276859d02392032149dff | 2022-01-05T10:16:00.000Z | ["pytorch", "roberta", "feature-extraction", "transformers"] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1 | 2 | null | transformers | 22,899 | Entry not found |
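All of the rows above report `library_name` = `transformers` with a `feature-extraction` or `question-answering` pipeline tag, and their `readme` field is "Entry not found" (no model card). As a minimal sketch of how such a record can still be exercised — assuming the checkpoint is still hosted on the Hugging Face Hub under the `modelId` shown, with its tokenizer files intact — the standard `transformers` Auto classes are enough; the specific id below is just one feature-extraction row picked from the table:

```python
# Minimal sketch: load one of the feature-extraction rows above by its modelId.
# Assumes the checkpoint is still downloadable from the Hugging Face Hub; any id
# in the table tagged "feature-extraction" should work the same way.
from transformers import AutoModel, AutoTokenizer

model_id = "AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Tokenize a sentence and run a forward pass to get contextual embeddings.
inputs = tokenizer("A sentence to embed.", return_tensors="pt")
outputs = model(**inputs)

# last_hidden_state has shape (batch, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```

For the `question-answering` rows, `AutoModelForQuestionAnswering` (or a `pipeline("question-answering", model=model_id)`) would be the analogous entry point.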