modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
persiannlp/mt5-large-parsinlu-arc-comqa-obqa-multiple-choice | 11bb178491c00702ce688de2bb472215512a2f11 | 2021-09-23T16:20:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"transformers",
"multiple-choice",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-large-parsinlu-arc-comqa-obqa-multiple-choice | 2 | null | transformers | 24,600 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pertschuk/0_RoBERTa | 066a634e4475f43b1895b261ae7205583f40c274 | 2020-04-15T23:33:48.000Z | [
"pytorch",
"transformers"
] | null | false | pertschuk | null | pertschuk/0_RoBERTa | 2 | null | transformers | 24,601 | Entry not found |
peter2000/xlm-roberta-base-finetuned-ecoicop | 4702cede7d86f3a88b4711b8d682437e0f118ddd | 2021-10-27T09:02:06.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | peter2000 | null | peter2000/xlm-roberta-base-finetuned-ecoicop | 2 | null | transformers | 24,602 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-ecoicop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ecoicop
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1685
- Acc: 0.9659
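A minimal sketch of how a classifier like this could be loaded for inference with the Transformers pipeline API; the example sentence is an illustrative assumption, and the returned labels depend on how the ECOICOP categories were encoded during fine-tuning:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the text-classification pipeline handles
# tokenization, inference, and label mapping in one call.
classifier = pipeline(
    "text-classification",
    model="peter2000/xlm-roberta-base-finetuned-ecoicop",
)

# The input text below is an assumption for illustration only.
print(classifier("Fresh whole milk, 1 litre bottle"))
```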
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4224 | 1.0 | 2577 | 0.3612 | 0.9132 |
| 0.2313 | 2.0 | 5154 | 0.2510 | 0.9441 |
| 0.1746 | 3.0 | 7731 | 0.1928 | 0.9569 |
| 0.1325 | 4.0 | 10308 | 0.1731 | 0.9640 |
| 0.0946 | 5.0 | 12885 | 0.1685 | 0.9659 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
phailyoor/distilbert-base-uncased-finetuned-yahd-2 | b69e108b3a795d80c7aa90946113338a6fe74b49 | 2021-11-10T20:24:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | phailyoor | null | phailyoor/distilbert-base-uncased-finetuned-yahd-2 | 2 | null | transformers | 24,603 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-yahd-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3850
- Accuracy: 0.2652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.2738 | 1.0 | 9556 | 2.2228 | 0.1996 |
| 1.9769 | 2.0 | 19112 | 2.1378 | 0.2321 |
| 1.6624 | 3.0 | 28668 | 2.1897 | 0.2489 |
| 1.3682 | 4.0 | 38224 | 2.2863 | 0.2538 |
| 1.1975 | 5.0 | 47780 | 2.3850 | 0.2652 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
phailyoor/distilbert-base-uncased-finetuned-yahd-twval | 452bf1458b355477cdc1608ef03715f76ced904c | 2021-11-14T19:41:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | phailyoor | null | phailyoor/distilbert-base-uncased-finetuned-yahd-twval | 2 | null | transformers | 24,604 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-yahd-twval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd-twval
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2540
- Accuracy: 0.2664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.1967 | 1.0 | 10086 | 2.9662 | 0.2068 |
| 1.865 | 2.0 | 20172 | 2.9499 | 0.3229 |
| 1.5135 | 3.0 | 30258 | 3.3259 | 0.3036 |
| 1.2077 | 4.0 | 40344 | 3.8351 | 0.2902 |
| 1.0278 | 5.0 | 50430 | 4.2540 | 0.2664 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
phailyoor/distilbert-base-uncased-finetuned-yahd | e340ea58530b5849578f4a1606be14badf87fa55 | 2021-11-10T18:19:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | phailyoor | null | phailyoor/distilbert-base-uncased-finetuned-yahd | 2 | null | transformers | 24,605 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-yahd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7685
- Accuracy: 0.4010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 2.2439 | 1.0 | 9142 | 2.1898 | 0.2130 |
| 1.9235 | 2.0 | 18284 | 2.1045 | 0.2372 |
| 1.5915 | 3.0 | 27426 | 2.1380 | 0.2550 |
| 1.3262 | 4.0 | 36568 | 2.2544 | 0.2758 |
| 1.0529 | 5.0 | 45710 | 2.5662 | 0.2955 |
| 0.8495 | 6.0 | 54852 | 2.8731 | 0.3078 |
| 0.6779 | 7.0 | 63994 | 3.1980 | 0.3218 |
| 0.5546 | 8.0 | 73136 | 3.6289 | 0.3380 |
| 0.4738 | 9.0 | 82278 | 3.9732 | 0.3448 |
| 0.412 | 10.0 | 91420 | 4.2945 | 0.3565 |
| 0.3961 | 11.0 | 100562 | 4.6127 | 0.3772 |
| 0.3292 | 12.0 | 109704 | 4.9586 | 0.3805 |
| 0.318 | 13.0 | 118846 | 5.2615 | 0.3887 |
| 0.2936 | 14.0 | 127988 | 5.4567 | 0.3931 |
| 0.2671 | 15.0 | 137130 | 5.6902 | 0.3965 |
| 0.2301 | 16.0 | 146272 | 5.7685 | 0.4010 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
philippelaban/headline_grouping | e8120f3cbc07c11bc80fc1c9fa8514187cb13d59 | 2021-08-04T20:38:16.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | philippelaban | null | philippelaban/headline_grouping | 2 | 1 | transformers | 24,606 | Entry not found |
pi3ni0/pubmedqa-scibert-classical | f4a045a58b190b2c9b747199acf4291bf2c89d18 | 2021-05-20T02:37:34.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | pi3ni0 | null | pi3ni0/pubmedqa-scibert-classical | 2 | null | transformers | 24,607 | Entry not found |
pistachiocow/RoyTBenBot | a5b2815fd30ee6bc7bd399d41e62b3787e81934a | 2021-09-12T15:29:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | pistachiocow | null | pistachiocow/RoyTBenBot | 2 | null | transformers | 24,608 | Entry not found |
pmthangk09/bert-base-uncased-sst | d5e88f091a66a363255ca976b845e508ad64757e | 2021-05-20T02:49:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pmthangk09 | null | pmthangk09/bert-base-uncased-sst | 2 | null | transformers | 24,609 | Entry not found |
prajjwal1/ctrl_discovery_10 | 0d521c97a705e6e3ea12d8557da3a8ba8ff5a367 | 2021-05-16T16:56:14.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_10 | 2 | null | transformers | 24,610 | Entry not found |
prajjwal1/ctrl_discovery_13 | 2b5a4716fef44d2cb7821fb865ca99ed2eb68ce1 | 2021-06-03T22:20:53.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_13 | 2 | null | transformers | 24,611 | Entry not found |
prajjwal1/ctrl_discovery_3 | cf6344aa71e060e67a9e4f038f30ef350fd652a1 | 2021-03-06T16:07:23.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_3 | 2 | null | transformers | 24,612 | Entry not found |
prajjwal1/ctrl_discovery_6 | 9facec54d3279b6194589b140a74329ba5a57679 | 2021-04-11T04:41:23.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_6 | 2 | null | transformers | 24,613 | Entry not found |
prajjwal1/ctrl_discovery_7 | f7672687e5b115d547237b507c0325882125a330 | 2021-04-25T18:47:46.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_7 | 2 | null | transformers | 24,614 | Entry not found |
prajjwal1/ctrl_discovery_8 | 3ef77b196954f8ff5e1d264b300b5019f108f54d | 2021-04-25T21:01:29.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_8 | 2 | null | transformers | 24,615 | Entry not found |
prajjwal1/ctrl_discovery_9 | c6b1b582c34d2fa145cb488e9cb4cb1473af4304 | 2021-05-16T16:34:38.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_9 | 2 | null | transformers | 24,616 | Entry not found |
prajjwal1/ctrl_discovery_flipped_1 | 2cc13302dc0fa2f7efcec940a6764ecd9c0b88f4 | 2021-03-03T16:03:04.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_flipped_1 | 2 | null | transformers | 24,617 | Entry not found |
prajjwal1/ctrl_discovery_flipped_3 | 269e50edf14a7cff030bc1ed17ca22f57e8b6b9b | 2021-03-30T18:44:22.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_flipped_3 | 2 | null | transformers | 24,618 | Entry not found |
prajjwal1/ctrl_discovery_flipped_4 | 5f3589229020ae3cc81c1e7297437b10e8f2f676 | 2021-03-30T19:14:49.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_flipped_4 | 2 | null | transformers | 24,619 | Entry not found |
prajjwal1/gpt2_xl_discovery | 738a15a54fa628f2be2ec7178bea10f77ed306de | 2021-08-10T01:03:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/gpt2_xl_discovery | 2 | null | transformers | 24,620 | Entry not found |
prajwalcr/poetry-anticipation_gpt2 | 3b83a5e63d13630a0a307527a0c6e6f1691440e2 | 2021-05-29T17:57:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-anticipation_gpt2 | 2 | null | transformers | 24,621 | Entry not found |
prajwalcr/poetry-joy_gpt2 | 7730bd18912cf09ab64c2f276f46775137094637 | 2021-08-03T06:54:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-joy_gpt2 | 2 | null | transformers | 24,622 | Entry not found |
princeton-nlp/datamux-mnli-10 | cb67af5abc076f15164377d734c074b133ad0830 | 2022-02-16T16:54:02.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-mnli-10 | 2 | null | transformers | 24,623 | Entry not found |
princeton-nlp/datamux-mnli-40 | 2dcaf05123f4c271c20230f16106734d19d4e1a9 | 2022-02-16T16:56:10.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-mnli-40 | 2 | null | transformers | 24,624 | Entry not found |
princeton-nlp/datamux-retrieval-10 | c6b1a0d908e4933525d2823724cf1131ac815238 | 2022-02-18T03:53:09.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-retrieval-10 | 2 | null | transformers | 24,625 | Entry not found |
princeton-nlp/datamux-retrieval-40 | 97553f2a1ad99f179668030cc153cd21f3f9cf83 | 2022-02-18T03:56:23.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-retrieval-40 | 2 | null | transformers | 24,626 | Entry not found |
princeton-nlp/densephrases-multi-query-sqd | a4d826ab92fdab03af5c5401a91e7aec47e7b0f8 | 2021-09-20T21:49:34.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-sqd | 2 | null | transformers | 24,627 | Entry not found |
princeton-nlp/densephrases-multi-query-wq | cbf2a28206b510fe00b799506561b8a92f4abe9f | 2021-09-20T21:39:19.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-wq | 2 | null | transformers | 24,628 | Entry not found |
proxyht/mdsister | 7ff0d93d9b75d6a7d1e3033686baf1aec5a17a3c | 2021-06-29T08:01:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | proxyht | null | proxyht/mdsister | 2 | 1 | transformers | 24,629 | Entry not found |
proycon/robbert2-ner-cased-sonar1-nld | af155036374c5afc5216173fb8d1502211bce320 | 2021-05-20T19:43:40.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | proycon | null | proycon/robbert2-ner-cased-sonar1-nld | 2 | null | transformers | 24,630 | Entry not found |
pszemraj/wavlm-large-timit-100epoch | 7e342a32a6e0ffe5310b0ae84e78196e4bfd74ce | 2021-12-29T18:11:32.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | pszemraj | null | pszemraj/wavlm-large-timit-100epoch | 2 | null | transformers | 24,631 | ---
tags:
- generated_from_trainer
model-index:
- name: timit-demo-wavlm-large
---
# timit-demo-wavlm-large
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the [Timit dataset](https://huggingface.co/datasets/timit_asr).
It achieves the following results on the evaluation set:
- Loss: 0.3784
- Wer: 0.2746
## Model description
Fine-tunes `microsoft/wavlm-large` on the [Timit dataset](https://huggingface.co/datasets/timit_asr) for 100 epochs to examine the results and compare against wav2vec2.
## Intended uses & limitations
This model should be used primarily for benchmarking / comparison purposes; models trained on the Timit dataset **do not** generalize well, as you will quickly see when testing the inference API.
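For such benchmarking, a minimal inference sketch with the Transformers automatic-speech-recognition pipeline could look like the following; the audio path is a placeholder assumption, not a file shipped with this model:
```python
from transformers import pipeline

# The ASR pipeline wraps feature extraction, the WavLM-CTC forward pass,
# and CTC decoding into a single call.
asr = pipeline(
    "automatic-speech-recognition",
    model="pszemraj/wavlm-large-timit-100epoch",
)

# "sample.wav" is a placeholder; pass any 16 kHz mono speech clip.
print(asr("sample.wav"))
```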
## Training and evaluation data
[Timit](https://huggingface.co/datasets/timit_asr) using standard splits.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.2656 | 4.0 | 500 | 2.9768 | 1.0 |
| 1.8004 | 8.0 | 1000 | 0.6151 | 0.6046 |
| 0.5425 | 12.0 | 1500 | 0.3802 | 0.4330 |
| 0.2647 | 16.0 | 2000 | 0.3015 | 0.3587 |
| 0.1697 | 20.0 | 2500 | 0.3225 | 0.3439 |
| 0.1164 | 24.0 | 3000 | 0.3162 | 0.3277 |
| 0.0951 | 28.0 | 3500 | 0.3102 | 0.3098 |
| 0.076 | 32.0 | 4000 | 0.3201 | 0.3052 |
| 0.0647 | 36.0 | 4500 | 0.3346 | 0.2990 |
| 0.0544 | 40.0 | 5000 | 0.3323 | 0.2955 |
| 0.0515 | 44.0 | 5500 | 0.3377 | 0.2898 |
| 0.045 | 48.0 | 6000 | 0.3268 | 0.2881 |
| 0.0393 | 52.0 | 6500 | 0.3404 | 0.2822 |
| 0.0364 | 56.0 | 7000 | 0.3337 | 0.2805 |
| 0.0329 | 60.0 | 7500 | 0.3485 | 0.2823 |
| 0.0327 | 64.0 | 8000 | 0.3362 | 0.2795 |
| 0.0287 | 68.0 | 8500 | 0.3768 | 0.2845 |
| 0.0284 | 72.0 | 9000 | 0.3736 | 0.2805 |
| 0.0292 | 76.0 | 9500 | 0.3761 | 0.2806 |
| 0.0251 | 80.0 | 10000 | 0.3735 | 0.2768 |
| 0.0224 | 84.0 | 10500 | 0.3741 | 0.2773 |
| 0.0232 | 88.0 | 11000 | 0.3760 | 0.2772 |
| 0.0213 | 92.0 | 11500 | 0.3729 | 0.2740 |
| 0.0204 | 96.0 | 12000 | 0.3722 | 0.2739 |
| 0.0199 | 100.0 | 12500 | 0.3784 | 0.2746 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
rafaelm47labs/spanishnews-classification | 688cd461c5623c3851401d2ffa34c1743a833c50 | 2021-09-02T10:06:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | rafaelm47labs | null | rafaelm47labs/spanishnews-classification | 2 | null | transformers | 24,632 | |
ragarwal/args-me-crossencoder-v1 | 82616e6c86e034c95e168d085589d54c5d63d4e5 | 2021-05-20T19:47:10.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ragarwal | null | ragarwal/args-me-crossencoder-v1 | 2 | null | transformers | 24,633 | Entry not found |
ragarwal/args-me-roberta-base | 11603ec3fc73c17a67e26baacb848fb46499429e | 2021-05-20T19:48:38.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ragarwal | null | ragarwal/args-me-roberta-base | 2 | null | transformers | 24,634 | modelhub test
|
rajeshradhakrishnan/malayalam-wiki2021-BERTo | 0858f3def56e859fe7aec3e386cdd44c1546185e | 2021-11-08T18:02:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rajeshradhakrishnan | null | rajeshradhakrishnan/malayalam-wiki2021-BERTo | 2 | null | transformers | 24,635 | Entry not found |
rajivratn/gupshup_e2e_pegasus | 25239020d2f33fc9d7d40291a51278d50c30eda1 | 2021-11-06T17:56:51.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | rajivratn | null | rajivratn/gupshup_e2e_pegasus | 2 | null | transformers | 24,636 | Entry not found |
ran/c9 | c4c949c5f5e24894353458d48d7053f7c079a6a3 | 2021-05-20T03:55:20.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ran | null | ran/c9 | 2 | null | transformers | 24,637 | Entry not found |
ravirajoshi/wav2vec2-large-xls-r-300m-hindi | 19d1ee64b6fbb25cfa53decc026d5ff95460ac91 | 2022-03-24T11:56:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ravirajoshi | null | ravirajoshi/wav2vec2-large-xls-r-300m-hindi | 2 | null | transformers | 24,638 | ---
language:
- hi
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: wav2vec2-large-xls-r-300m-hindi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7049
- Wer: 0.3200
|
remotejob/tweetsDISTILGPT2fi_v3 | e1e490eca4c8600692002b577a32d63c80e59bc7 | 2021-11-05T07:05:34.000Z | [
"pytorch",
"rust",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | remotejob | null | remotejob/tweetsDISTILGPT2fi_v3 | 2 | null | transformers | 24,639 | Entry not found |
remotejob/tweetsGPT2fi_v1 | 21562d194537eb3bb3a5cb8028cd340a2d0bb351 | 2021-06-12T16:40:33.000Z | [
"pytorch",
"rust",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | remotejob | null | remotejob/tweetsGPT2fi_v1 | 2 | null | transformers | 24,640 | Entry not found |
researchaccount/continue_mlm | 50afb2416b745b9bcbcf3a58ba5f3ec745ea20c3 | 2021-05-20T04:18:46.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | researchaccount | null | researchaccount/continue_mlm | 2 | null | transformers | 24,641 | Entry not found |
researchaccount/sa_sub3 | 6b170e6b699b92c44b8d7840ae40e68b62af754c | 2021-05-20T04:23:04.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers"
] | text-classification | false | researchaccount | null | researchaccount/sa_sub3 | 2 | null | transformers | 24,642 | ---
language: en
widget:
- text: "USER USER USER USER لاحول ولاقوه الا بالله 💔 💔 💔 💔 HASH TAG متي يصدر قرار العشرين ! ! ! ! ! !"
---
Sub 3 |
reza/xlm-roberta-base-finetuned-marc-en | 34dc5c15f5314e93f7fb7dc02beeee58947615f4 | 2021-10-22T13:15:30.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | reza | null | reza/xlm-roberta-base-finetuned-marc-en | 2 | null | transformers | 24,643 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9569
- Mae: 0.5244
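A minimal inference sketch; the review text is an illustrative assumption, and the mapping from the predicted label to a star value assumes the head kept the default `LABEL_k` names of auto-generated fine-tunes:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="reza/xlm-roberta-base-finetuned-marc-en",
)

# The review text is an assumption for illustration.
prediction = classifier("The case looks nice but broke after two days of use.")[0]
print(prediction)

# If the head uses the default LABEL_0..LABEL_4 names (an assumption),
# the star rating can be recovered as label index + 1.
stars = int(prediction["label"].split("_")[-1]) + 1
print(f"Predicted rating: {stars} stars")
```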
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1386 | 1.0 | 235 | 1.0403 | 0.5122 |
| 0.9591 | 2.0 | 470 | 0.9569 | 0.5244 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
rfulton/my_model | 0579c641707d1043c7492a445a9cbc616d5d803e | 2021-08-23T20:59:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | rfulton | null | rfulton/my_model | 2 | null | transformers | 24,644 | Entry not found |
rg089/distilbart-summarization | 9654473ec640cd8df14e04f976f23cfc23265a38 | 2021-11-27T19:10:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | rg089 | null | rg089/distilbart-summarization | 2 | null | transformers | 24,645 | Entry not found |
ricardo-filho/BERT-pt-institutional-corpus-v.1 | d6443b74923d87abde908ddfb31ccc58636e6202 | 2021-07-27T22:29:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ricardo-filho | null | ricardo-filho/BERT-pt-institutional-corpus-v.1 | 2 | null | transformers | 24,646 | Entry not found |
ricardo-filho/bertimbau_base_snli_mnrl | e5df684f474ebdc3c63e5b0e8aa43a1e8d8ce207 | 2021-08-09T21:01:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/bertimbau_base_snli_mnrl | 2 | null | sentence-transformers | 24,647 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4059 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 405,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 406,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ricardo-filho/sbertimbau-large-allnli-mnrl | f2c8dfe06374382e56a40dad7a53985b758a611e | 2021-08-12T19:44:32.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/sbertimbau-large-allnli-mnrl | 2 | 1 | sentence-transformers | 24,648 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 16133 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 1613,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1614,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ristekcsui/bert-base-hs | ccd841a0eb02b67e5a02b74fc791f5a0c71ef5f6 | 2022-01-31T04:03:56.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ristekcsui | null | ristekcsui/bert-base-hs | 2 | null | transformers | 24,649 | Entry not found |
rkmt/repo | 6b334850385495bb54acea66ad112623b8dc2d9d | 2022-02-13T12:32:08.000Z | [
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rkmt | null | rkmt/repo | 2 | null | transformers | 24,650 | Entry not found |
rndlr96/EnBERT_BCE | 4213315c27badfbb2962fa0554aed5a385379067 | 2021-05-20T04:29:02.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/EnBERT_BCE | 2 | null | transformers | 24,651 | Entry not found |
rndlr96/label256 | 7359c6225665a5d8c39decda39410fe729e40ddb | 2021-05-20T04:31:56.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/label256 | 2 | null | transformers | 24,652 | Entry not found |
rossanez/t5-base-finetuned-de-en | 95a00bcf34ba4499acc700c66ef027f73623cf15 | 2021-12-01T10:55:50.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-base-finetuned-de-en | 2 | null | transformers | 24,653 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-base-finetuned-de-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-de-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
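A minimal sketch of German-to-English translation with this checkpoint; the standard T5 task prefix and the example sentence are illustrative assumptions:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "rossanez/t5-base-finetuned-de-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# T5-style models are usually prompted with a task prefix; whether this
# fine-tune kept the prefix is an assumption here.
text = "translate German to English: Das Wetter ist heute schön."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```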
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.4324 | 1.2308 | 17.8904 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256-lr2e-4 | 1b2376b915881c6ecc8a51c0dcffe065bb7281e9 | 2021-12-01T00:40:20.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-256-lr2e-4 | 2 | null | transformers | 24,654 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256-lr2e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-lr2e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1169 | 7.6948 | 17.4103 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256-nofp16 | 462cead30aae95920be494e74265142f1af02ea9 | 2021-12-01T00:54:59.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-256-nofp16 | 2 | null | transformers | 24,655 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256-nofp16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-nofp16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1234 | 7.7305 | 17.4033 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rsedlr/RickBotExample | 751dc3aa2835d00a92c8164c180886b210d47c3e | 2021-08-09T15:51:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rsedlr | null | rsedlr/RickBotExample | 2 | null | transformers | 24,656 | ---
tags:
- conversational
---
# RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
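A minimal sketch of chatting with a GPT-2-based conversational checkpoint like this one, in the usual DialoGPT turn format; the prompt and decoding settings are illustrative assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "rsedlr/RickBotExample"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# DialoGPT-style chat models expect each turn to end with the EOS token.
prompt = "Hi Rick, where are we going today?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```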
|
rywerth/Rupi-or-Not-Rupi | 58c0c361408a5c4459cdaba1cca7d4aeef68a969 | 2021-05-23T12:18:29.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rywerth | null | rywerth/Rupi-or-Not-Rupi | 2 | null | transformers | 24,657 | hello
|
s3h/arabert-classification | e259873a646468042e626ada13071d74c182be11 | 2022-01-01T12:18:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | s3h | null | s3h/arabert-classification | 2 | null | transformers | 24,658 | Entry not found |
saattrupdan/xlmr-base-texas-squad-es | 2b5b2d132fd0c58bf0ecd7095a47d521766aa3cf | 2022-03-18T16:51:52.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | saattrupdan | null | saattrupdan/xlmr-base-texas-squad-es | 2 | null | transformers | 24,659 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-es
results: []
widget:
- text: "¿Quién invitó a Raísa Gorbachova a tomar una copa?"
context: "Las tapas han llegado a convertirse en una señal de identidad española y son ofrecidas en los banquetes de recepción a los más altos dignatarios (en los denominados tapas meeting). Así, durante la Conferencia de Paz de Madrid la Reina Sofía y el alcalde de Madrid José María Álvarez del Manzano invitaron a Raísa Gorbachova a una bebida con tapa durante su visita a la capital española. En la modernidad existen bares que ofrecen especialidades de tapas y a este fenómeno se le ha denominado cocina en miniatura. No obstante, el concepto de tapa ha sido llevado a la alta cocina por el cocinero Ferran Adrià que los emplea como entradas."
---
# TExAS-SQuAD-es
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-es dataset.
It achieves the following results on the evaluation set:
- Exact match: xx.xx%
- F1-score: xx.xx%
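A minimal sketch of querying the model with the Transformers question-answering pipeline, reusing a shortened version of the widget example from this card:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="saattrupdan/xlmr-base-texas-squad-es",
)

# Question and (shortened) context taken from the widget example above.
result = qa(
    question="¿Quién invitó a Raísa Gorbachova a tomar una copa?",
    context=(
        "Durante la Conferencia de Paz de Madrid la Reina Sofía y el alcalde "
        "de Madrid José María Álvarez del Manzano invitaron a Raísa Gorbachova "
        "a una bebida con tapa durante su visita a la capital española."
    ),
)
print(result)
```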
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.0645 | 0.24 | 1000 | 1.7915 |
| 1.8458 | 0.47 | 2000 | 1.7873 |
| 1.8208 | 0.71 | 3000 | 1.6628 |
| 1.7743 | 0.95 | 4000 | 1.5684 |
| 1.5636 | 1.18 | 5000 | 1.5686 |
| 1.6017 | 1.42 | 6000 | 1.5484 |
| 1.6271 | 1.66 | 7000 | 1.5173 |
| 1.5975 | 1.89 | 8000 | 1.5209 |
| 1.477 | 2.13 | 9000 | 1.5766 |
| 1.4389 | 2.37 | 10000 | 1.5392 |
| 1.3389 | 2.6 | 11000 | 1.5298 |
| 1.437 | 2.84 | 12000 | 1.5504 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3 |
saburbutt/albert_xxlarge_tweetqa_v2 | 3aa1c0a44a17faac7d337b8c1420e6d145b5ca65 | 2021-04-13T22:36:46.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saburbutt | null | saburbutt/albert_xxlarge_tweetqa_v2 | 2 | null | transformers | 24,660 | |
sagar/pretrained-FinBERT | 71d73dec4251bf0e3912c5c1b0d431f3edccbda2 | 2021-01-04T04:34:18.000Z | [
"pytorch",
"transformers"
] | null | false | sagar | null | sagar/pretrained-FinBERT | 2 | null | transformers | 24,661 | FinBert Pretrained model to be used with downstream tasks |
sagittariusA/media_bias_classifier_cs | 4dff5cdc1ffa0898aed43f8756ea5ceacb007a0b | 2022-01-07T21:15:38.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | sagittariusA | null | sagittariusA/media_bias_classifier_cs | 2 | null | transformers | 24,662 | Entry not found |
sagteam/covid-twitter-xlm-roberta-large | 28782639dde040b06460d8b5bc09b9525f3e7c34 | 2022-07-27T11:41:43.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:1911.02116",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sagteam | null | sagteam/covid-twitter-xlm-roberta-large | 2 | null | transformers | 24,663 | # COVID-twitter-XLM-Roberta-large
## Model description
This is a model based on the [XLM-RoBERTa large](https://huggingface.co/xlm-roberta-large) architecture (provided by Facebook, see the original [paper](https://arxiv.org/abs/1911.02116)) with additional training on a corpus of unlabeled tweets.
For more details, please see our [GitHub repository](https://github.com/sag111/COVID-19-tweets-Russia).
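A minimal fill-mask sketch; the example sentence is an illustrative assumption, and `<mask>` is the mask token used by XLM-RoBERTa tokenizers:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="sagteam/covid-twitter-xlm-roberta-large",
)

# The tweet-like example sentence is an assumption for illustration only.
for prediction in fill_mask("Stay home and get tested for <mask>."):
    print(prediction["token_str"], prediction["score"])
```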
## Training data
We formed a corpus of unlabeled Twitter messages.
The data collected on the keyword "covid" was expanded with texts containing other words that often occur in hashtags about the Covid-19 pandemic: "covid", "stayhome", and "coronavirus" (hereinafter, these are translations of Russian words into English).
Separately, messages were collected from Twitter users in large regions of Russia. This search used different word forms of 58 manually selected Russian keywords related to coronavirus infection (including "PCR", "pandemic", "self-isolation", etc.).
The unlabeled corpus includes all unique Russian-language tweets from the collected data (>1M tweets). Since modern language models are usually multilingual, about 1M more tweets in other languages were added to this corpus using the filtering procedures described above. Thus, the unlabeled part of the collected data contains about 2 million messages.
### BibTeX entry and citation info
Our GitHub repository: https://github.com/sag111/COVID-19-tweets-Russia
If you have found our results helpful in your work, feel free to cite our publication and this repository as:
```
@article{sboev2021russian,
title={The Russian language corpus and a neural network to analyse Internet tweet reports about Covid-19},
  author={Sboev, Alexander and Moloshnikov, Ivan and Naumov, Alexander and Levochkina, Anastasia and Rybka, Roman},
year={2021}
}
```
|
sagteam/xlm-roberta-large-sag | ed249e398098a3717c94e8e24d4130bb6f54a5d1 | 2021-11-24T18:19:22.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"multilingual",
"arxiv:1911.02116",
"arxiv:2004.03659",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | sagteam | null | sagteam/xlm-roberta-large-sag | 2 | 1 | transformers | 24,664 | ---
language: multilingual
thumbnail: "url to a thumbnail used in social sharing"
tags: exbert
license: apache-2.0
---
# XLM-RoBERTa-large-sag
## Model description
This is a model based on the [XLM-RoBERTa large](https://huggingface.co/xlm-roberta-large) architecture (provided by Facebook, see the original [paper](https://arxiv.org/abs/1911.02116)) with additional training on two sets of medicine-domain texts:
* about 250,000 text reviews of medicines (about 1,000 tokens long on average), collected from the site irecommend.ru;
* the raw part of the [RuDReC corpus](https://github.com/cimm-kzn/RuDReC) (about 1.4 million texts, see [paper](https://arxiv.org/abs/2004.03659)).
Training XLM-RoBERTa-large for one epoch on this data was performed using a single Nvidia Tesla V100 and the Hugging Face Transformers library.
## BibTeX entry and citation info
If you have found our results helpful in your work, feel free to cite our publication as:
```
@article{sboev2021analysis,
title={An analysis of full-size Russian complexly NER labelled corpus of Internet user reviews on the drugs based on deep learning and language neural nets},
author={Sboev, Alexander and Sboeva, Sanna and Moloshnikov, Ivan and Gryaznov, Artem and Rybka, Roman and Naumov, Alexander and Selivanov, Anton and Rylkov, Gleb and Ilyin, Viacheslav},
journal={arXiv preprint arXiv:2105.00059},
year={2021}
}
``` |
saibo/random-bert-base-uncased | ac861c74b23a9821aea308267fa3ca4283782811 | 2021-07-29T14:36:42.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | saibo | null | saibo/random-bert-base-uncased | 2 | null | transformers | 24,665 | Entry not found |
saichandrapandraju/t5_base_tabqgen | cecc1ea44263bc1ad8aa533f75bea9c23ed86d54 | 2021-06-23T14:04:10.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | saichandrapandraju | null | saichandrapandraju/t5_base_tabqgen | 2 | null | transformers | 24,666 | Entry not found |
sam890914/autonlp-roberta-large2-479012819 | 520da5554581303c82e6511374f476ac2ff62fe9 | 2022-01-06T08:46:51.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:sam890914/autonlp-data-roberta-large2",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | sam890914 | null | sam890914/autonlp-roberta-large2-479012819 | 2 | null | transformers | 24,667 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- sam890914/autonlp-data-roberta-large2
co2_eq_emissions: 71.60954851696604
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 479012819
- CO2 Emissions (in grams): 71.60954851696604
## Validation Metrics
- Loss: 0.22774338722229004
- Accuracy: 0.9395126938149599
- Precision: 0.9677075940383251
- Recall: 0.9117352056168505
- AUC: 0.9862377263827619
- F1: 0.9388879325185058
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/sam890914/autonlp-roberta-large2-479012819
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("sam890914/autonlp-roberta-large2-479012819", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("sam890914/autonlp-roberta-large2-479012819", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
samkphd31/ASRS-CMFS | 83e2c4fb773adbae379bae26ab709a3d7ca5232c | 2021-11-17T14:36:13.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | samkphd31 | null | samkphd31/ASRS-CMFS | 2 | null | transformers | 24,668 | Entry not found |
sammy786/wav2vec2-xlsr-interlingua | e8b38c0848fd7afdc3aaa01775dcfec6d8315180 | 2022-03-24T11:56:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ia",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-interlingua | 2 | null | transformers | 24,669 | ---
language:
- ia
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ia
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-interlingua
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ia
metrics:
- name: Test WER
type: wer
value: 16.81
- name: Test CER
type: cer
value: 4.76
---
# sammy786/wav2vec2-xlsr-interlingua
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ia dataset.
It achieves the following results on the evaluation set (10 percent of the train data set merged with the other and dev sets):
- Loss: 5.44
- Wer: 19.78
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Interlingua train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were concatenated and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.649200 | 0.483339 | 0.511322 |
| 400 | 0.764700 | 0.133428 | 0.251288 |
| 600 | 0.563700 | 0.099292 | 0.227745 |
| 800 | 0.438800 | 0.087545 | 0.217445 |
| 1000 | 0.406800 | 0.072313 | 0.213848 |
| 1200 | 0.237500 | 0.066965 | 0.213766 |
| 1400 | 0.177800 | 0.064419 | 0.208126 |
| 1600 | 0.157100 | 0.065962 | 0.214011 |
| 1800 | 0.146600 | 0.059477 | 0.202076 |
| 2000 | 0.132800 | 0.055015 | 0.201831 |
| 2200 | 0.122000 | 0.055421 | 0.201749 |
| 2400 | 0.115700 | 0.054462 | 0.197826 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-interlingua --dataset mozilla-foundation/common_voice_8_0 --config ia --split test
``` |
sammy786/wav2vec2-xlsr-romansh_sursilvan | 7f80a82062614bc9347256a9209471334c6c294a | 2022-03-24T11:58:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"rm-sursilv",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-romansh_sursilvan | 2 | null | transformers | 24,670 | ---
language:
- rm-sursilv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- rm-sursilv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-romansh_sursilvan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: rm-sursilv
metrics:
- name: Test WER
type: wer
value: 13.82
- name: Test CER
type: cer
value: 3.02
---
# sammy786/wav2vec2-xlsr-romansh_sursilvan
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - rm-sursilv dataset.
It achieves the following results on the evaluation set (10 percent of the train dataset merged with the other and dev datasets):
- Loss: 16.38
- Wer: 21.25
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Romansh Sursilvan train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.825500 | 2.932350 | 1.000000 |
| 400 | 1.325600 | 0.292645 | 0.415436 |
| 600 | 0.709800 | 0.219167 | 0.324451 |
| 800 | 0.576800 | 0.174390 | 0.275477 |
| 1000 | 0.538100 | 0.183737 | 0.272116 |
| 1200 | 0.475200 | 0.159078 | 0.253871 |
| 1400 | 0.420400 | 0.167277 | 0.240907 |
| 1600 | 0.393500 | 0.167216 | 0.247269 |
| 1800 | 0.407500 | 0.178282 | 0.239827 |
| 2000 | 0.374400 | 0.184590 | 0.239467 |
| 2200 | 0.382600 | 0.164106 | 0.227824 |
| 2400 | 0.363100 | 0.162543 | 0.228544 |
| 2600 | 0.199000 | 0.172903 | 0.231665 |
| 2800 | 0.150800 | 0.160117 | 0.222662 |
| 3000 | 0.101100 | 0.169553 | 0.222662 |
| 3200 | 0.104200 | 0.161056 | 0.220622 |
| 3400 | 0.096900 | 0.161562 | 0.216781 |
| 3600 | 0.092200 | 0.163880 | 0.212580 |
| 3800 | 0.089200 | 0.162288 | 0.214140 |
| 4000 | 0.076200 | 0.160470 | 0.213540 |
| 4200 | 0.087900 | 0.162827 | 0.213060 |
| 4400 | 0.066200 | 0.161096 | 0.213300 |
| 4600 | 0.076000 | 0.162060 | 0.213660 |
| 4800 | 0.071400 | 0.162045 | 0.213300 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-romansh_sursilvan --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test
``` |
sana-ngu/Hat5-Roberta | 9acae6280268216ffc5df811c7c529713f9cdc73 | 2022-02-09T04:26:50.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | sana-ngu | null | sana-ngu/Hat5-Roberta | 2 | null | transformers | 24,671 | Entry not found |
sanayAI/output | 5f199eae8d4b4e54b7c5982f2d3f95680b711ead | 2021-05-20T04:41:38.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sanayAI | null | sanayAI/output | 2 | null | transformers | 24,672 | Entry not found |
sancharidan/scibet_expertfinder | 5c47620f6a8ff65b4d59989a87b263e23c44ea37 | 2021-07-18T06:51:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sancharidan | null | sancharidan/scibet_expertfinder | 2 | null | transformers | 24,673 | Entry not found |
sanchit-gandhi/wav2vec2-2-bart-large-frozen-enc | 2e4c59ae933be93a9b8e370de9f835f82941d80e | 2022-02-22T15:43:21.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bart-large-frozen-enc | 2 | null | transformers | 24,674 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3123
- Wer: 0.0908
## Model description
More information needed
## Intended uses & limitations
More information needed
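Although the card gives no usage details, a minimal inference sketch would look like the following; it assumes the repository ships the feature extractor and tokenizer needed to build an ASR pipeline, which is not confirmed here.
```python
from datasets import load_dataset
from transformers import pipeline

# Hypothetical usage of the speech-encoder-decoder checkpoint for transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/wav2vec2-2-bart-large-frozen-enc",
)

# One LibriSpeech validation sample (the dataset used for training), already 16 kHz.
sample = load_dataset("librispeech_asr", "clean", split="validation[:1]")[0]["audio"]
print(asr(sample["array"]))  # -> {"text": "..."}
```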
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4937 | 0.28 | 500 | 5.2312 | 0.9660 |
| 3.821 | 0.56 | 1000 | 4.5810 | 0.9066 |
| 1.2129 | 0.84 | 1500 | 1.3723 | 0.3928 |
| 0.6575 | 1.12 | 2000 | 0.6645 | 0.1810 |
| 0.489 | 1.4 | 2500 | 0.5523 | 0.1479 |
| 0.3541 | 1.68 | 3000 | 0.4585 | 0.1195 |
| 0.3573 | 1.96 | 3500 | 0.3859 | 0.1066 |
| 0.2437 | 2.24 | 4000 | 0.3747 | 0.1015 |
| 0.1406 | 2.52 | 4500 | 0.3346 | 0.0952 |
| 0.1468 | 2.8 | 5000 | 0.3123 | 0.0908 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sangrimlee/mt5-small-e2e-qg | ff7a0b28907eb6e09f39d251e219a40ae085cc68 | 2021-06-23T16:34:09.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sangrimlee | null | sangrimlee/mt5-small-e2e-qg | 2 | null | transformers | 24,675 | Entry not found |
sangrimlee/mt5-small-qg-hl | 15489773d297075acd948273f54de29312e4fe18 | 2021-06-23T16:36:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sangrimlee | null | sangrimlee/mt5-small-qg-hl | 2 | null | transformers | 24,676 | Entry not found |
sankhajay/mt5-base-sinaha-qa | aa1d99b7150eb7bbf963d493adee978888916566 | 2022-01-27T05:35:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"si",
"transformers",
"question-answering",
"Sinhala",
"autotrain_compatible"
] | question-answering | false | sankhajay | null | sankhajay/mt5-base-sinaha-qa | 2 | null | transformers | 24,677 |
---
language: si
tags:
- question-answering
- Sinhala
widget:
- context: "ශ්රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි."
text: "ශ්රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?"
---
# mt5-base-sinhala-qa
This is an mT5-based question-answering model for the Sinhala language, trained on a SQuAD dataset of roughly 8k questions translated into Sinhala with the Google Translate API.
Training was done in a Google Colab TPU environment with parallel training techniques, on around 9k context-question-answer triples for Sinhala. Evaluation uses the standard SQuAD evaluation script on around 1k data points; the best parameter setting gave the results below. The evaluation metrics are exact match (EM) and F1 score.
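No usage snippet is included in the card; the sketch below assumes the common text-to-text convention of a `question: ... context: ...` input, which is not documented here and may differ from the format used in training.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "sankhajay/mt5-base-sinaha-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Question and context taken from the widget example above.
question = "ශ්රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?"
context = "ශ්රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි."

input_text = f"question: {question} context: {context}"  # assumed prompt format
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```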
Evaluation - {'EM': 39.413680781758956, 'f1': 66.16331104953571} |
santhoshkolloju/ans_gen2 | 923df92a5877b292c9abd255bc6933490175159d | 2021-06-23T14:07:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | santhoshkolloju | null | santhoshkolloju/ans_gen2 | 2 | null | transformers | 24,678 | Entry not found |
santhoshkolloju/t5_qg_multi3 | 157b3328f256b33e4b6d4b4a3e010435abe7b81f | 2021-06-23T14:10:18.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | santhoshkolloju | null | santhoshkolloju/t5_qg_multi3 | 2 | null | transformers | 24,679 | Entry not found |
saraks/cuad-distil-document_name-08-25 | 27657914558539e500421b0a214efa9d5f2c1ed7 | 2021-08-25T10:39:43.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-document_name-08-25 | 2 | null | transformers | 24,680 | Entry not found |
saraks/cuad-distil-governing_law-08-25 | e4767e1fa19411ac235fe86de30143e4b16d0a34 | 2021-08-25T16:29:52.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-governing_law-08-25 | 2 | null | transformers | 24,681 | Entry not found |
saraks/cuad-distil-parties-08-25 | 4290aab4166e5c1c32cb21ca76c155d5bead7839 | 2021-08-25T10:32:00.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-parties-08-25 | 2 | null | transformers | 24,682 | Entry not found |
saraks/cuad-distil-parties-cased-08-31-v1 | b4f517909117c036ba272392ecc9e649dd6cd1b3 | 2021-08-31T16:36:18.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-parties-cased-08-31-v1 | 2 | null | transformers | 24,683 | Entry not found |
saraks/cuad-distil-parties-dates-law-08-18-id-question1 | 1d37116bc1e02b32d7b38ce69116002466ab529e | 2021-08-18T17:49:38.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-parties-dates-law-08-18-id-question1 | 2 | null | transformers | 24,684 | Entry not found |
satishjasthij/cola | 578ac3ee70e0993d84268155e7a8c2f23c07ff6b | 2022-02-24T05:59:13.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | satishjasthij | null | satishjasthij/cola | 2 | null | transformers | 24,685 | Entry not found |
scasutt/Prototype_training | 883a6b51b6cc06d624eb29f01ce0d71d96c1109d | 2022-01-04T14:59:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/Prototype_training | 2 | null | transformers | 24,686 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Prototype_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototype_training
This model is a fine-tuned version of [scasutt/Prototype_training](https://huggingface.co/scasutt/Prototype_training) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3719
- Wer: 0.4626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
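Purely as an illustration (the original training script is not part of this card), these values map onto the `transformers` Trainer API roughly as follows:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="Prototype_training",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size 32
    warmup_steps=1000,
    num_train_epochs=4,
    lr_scheduler_type="linear",
)
```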
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3853 | 1.47 | 100 | 0.3719 | 0.4626 |
| 0.3867 | 2.94 | 200 | 0.3719 | 0.4626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
scasutt/Prototype_training_large_model | fdc5615888fe5d74b938f3bed98cdad3b54fab91 | 2021-12-30T14:40:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/Prototype_training_large_model | 2 | null | transformers | 24,687 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Prototype_training_large_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototype_training_large_model
This model is a fine-tuned version of [scasutt/Prototype_training_large_model](https://huggingface.co/scasutt/Prototype_training_large_model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2585
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.0545 | 1.47 | 100 | 3.2604 | 1.0 |
| 3.0413 | 2.93 | 200 | 3.2585 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
sdadas/polish-roberta-base-v1 | 2ce118eda32d6b81cf06be5d3d1b831ecf85322d | 2022-02-19T10:01:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:lgpl-3.0",
"autotrain_compatible"
] | fill-mask | false | sdadas | null | sdadas/polish-roberta-base-v1 | 2 | null | transformers | 24,688 | ---
license: lgpl-3.0
---
|
seanbethard/autonlp-summarization_model-8771942 | 284ea1a366a8ca13e9f9ac4f26fa45c8996a4094 | 2021-08-26T19:34:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:seanbethard/autonlp-data-summarization_model",
"transformers",
"autonlp",
"autotrain_compatible"
] | text2text-generation | false | seanbethard | null | seanbethard/autonlp-summarization_model-8771942 | 2 | null | transformers | 24,689 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- seanbethard/autonlp-data-summarization_model
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 8771942
## Validation Metrics
- Loss: 0.7463301420211792
- Rouge1: 19.9454
- Rouge2: 13.0362
- RougeL: 17.5797
- RougeLsum: 17.7459
- Gen Len: 19.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/seanbethard/autonlp-summarization_model-8771942
``` |
sebaverde/bertitude-ita-tweets | 92b0fff26077e4621499e76fb20633021ba32849 | 2021-05-20T05:09:11.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sebaverde | null | sebaverde/bertitude-ita-tweets | 2 | null | transformers | 24,690 | Entry not found |
seduerr/lang_det | 51aa5add46ef8ff2b8e5f1219639f72a4e9b8ef2 | 2021-06-23T14:11:46.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/lang_det | 2 | null | transformers | 24,691 | Entry not found |
seduerr/pai_exin | a99ecb147976d2278c53ef6898db14fbc0b0a44b | 2021-07-08T08:46:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_exin | 2 | null | transformers | 24,692 | Entry not found |
seduerr/pai_formtrans | 21b506dd3f45b30aa67ddbb2fe7457f9a533fabc | 2021-06-23T14:14:21.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_formtrans | 2 | null | transformers | 24,693 | Entry not found |
seduerr/pai_fuser_short | 9e4a2d404fe126bef6b573a0de5c6a6ff114ba9d | 2021-05-01T13:43:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_fuser_short | 2 | null | transformers | 24,694 | Entry not found |
seduerr/pai_m2f | d6dffa2a977bc262d7d2e7c6d3a0a02de5974acf | 2021-06-23T14:14:58.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_m2f | 2 | null | transformers | 24,695 | Entry not found |
seduerr/pai_paraph | 9c3030ffe757506dad899777100598440a29ae66 | 2021-06-08T08:43:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_paraph | 2 | null | transformers | 24,696 | input_ = 'paraphrase: ' + str(input_) + ' </s>'
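A minimal sketch of how this prefix would be used with the text2text generation API (hypothetical usage; the card documents only the input format above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "seduerr/pai_paraph"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "The weather today is wonderful."
input_ = "paraphrase: " + str(text) + " </s>"  # input format shown above

input_ids = tokenizer(input_, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```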
|
sello-ralethe/bert-base-generics-mlm | d94730c31ffa81fdf1c6f2ff1d823b87ecf7de7f | 2021-05-20T05:12:57.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sello-ralethe | null | sello-ralethe/bert-base-generics-mlm | 2 | null | transformers | 24,697 | Entry not found |
serenay/autonlp-Emotion-14722565 | 88c6ada300f8b9b1571f59682393fab5ea53e351 | 2021-10-04T08:49:20.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:serenay/autonlp-data-Emotion",
"transformers",
"autonlp"
] | text-classification | false | serenay | null | serenay/autonlp-Emotion-14722565 | 2 | null | transformers | 24,698 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- serenay/autonlp-data-Emotion
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 14722565
## Validation Metrics
- Loss: 0.6077525615692139
- Accuracy: 0.7745398773006135
- Macro F1: 0.7287152925396537
- Micro F1: 0.7745398773006135
- Weighted F1: 0.7754701717098939
- Macro Precision: 0.7282186282186283
- Micro Precision: 0.7745398773006135
- Weighted Precision: 0.7787550922520248
- Macro Recall: 0.7314173610899214
- Micro Recall: 0.7745398773006135
- Weighted Recall: 0.7745398773006135
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/serenay/autonlp-Emotion-14722565
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("serenay/autonlp-Emotion-14722565", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("serenay/autonlp-Emotion-14722565", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
sergiyvl/just_first_try_to_my_diplom_onBert | 3a18c52a42c6ad0433d163ba80f52e677c968647 | 2021-05-20T05:37:44.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sergiyvl | null | sergiyvl/just_first_try_to_my_diplom_onBert | 2 | null | transformers | 24,699 | Entry not found |