modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-10 | e1ceedfa951df0d8f187dcaf4f790dd6c68cfbbb | 2022-02-25T21:57:46.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-10 | 2 | null | transformers | 25,000 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-10
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
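These values map directly onto standard `transformers` `TrainingArguments`. As a point of reference, here is a minimal sketch of an equivalent configuration (the actual training script is not part of this card; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are already the defaults):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="spanbert-squad-k32-seed10",
    learning_rate=3e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=200,  # corresponds to "training_steps: 200"
)
```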
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
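The card leaves usage unspecified; below is a minimal question-answering sketch with the standard `transformers` pipeline (the question and context are placeholders):
```python
from transformers import pipeline

# Answer quality will reflect the few-shot (k=32) fine-tuning setup.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-10",
)
qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
```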
|
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-0 | e1fd14b9584645a16e498c3e0103207b448f42d4 | 2022-02-25T22:13:00.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-0 | 2 | null | transformers | 25,001 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-0
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-4 | 482ae785c05c6432fad650ec4ca3a315e543bb0b | 2022-02-25T22:43:24.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-4 | 2 | null | transformers | 25,002 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-4
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10 | 49f67379b563ffc66eb8cabf02660a57f7673a33 | 2022-02-25T23:29:09.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10 | 2 | null | transformers | 25,003 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-6 | b61ea1b45497c6ad359fe54f5b69021ae5c0768b | 2022-02-26T04:38:59.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-6 | 2 | null | transformers | 25,004 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-6
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-10 | 6a232d73710560a3bdffc8fbfb046f5da0a289ce | 2022-02-26T05:09:28.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-10 | 2 | null | transformers | 25,005 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-10
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-6 | bf970c345f8cb94c26c01521b1e9e2c4e78f4c10 | 2022-02-26T06:07:51.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-6 | 2 | null | transformers | 25,006 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-6
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-8 | 74c1a6c1138a3893b05b52eee06ce28e48e70123 | 2022-02-26T06:21:42.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-8 | 2 | null | transformers | 25,007 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-10 | 35336bbef5989f88a390614216345bca96e226a1 | 2022-02-26T06:36:19.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-10 | 2 | null | transformers | 25,008 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-10
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-0 | 8e6ebc5d9540043085a575456a0056e5751efc7c | 2022-02-26T06:51:47.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-0 | 2 | null | transformers | 25,009 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2 | 272a54f457eda5f2d434ef41df062d40658b2e94 | 2022-02-26T07:07:11.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2 | 2 | null | transformers | 25,010 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-4 | 210ecf98963c50da90df7798ad43c590856503a7 | 2022-02-26T07:22:34.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-4 | 2 | null | transformers | 25,011 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-10 | e7fea6a07e1a190cd7ab2b37810d34559d5fd220 | 2022-02-26T08:08:44.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-10 | 2 | null | transformers | 25,012 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-10
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-4 | 39319a478bf9b662e61dca3b26adc6716bc6a650 | 2022-02-26T08:59:53.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-4 | 2 | null | transformers | 25,013 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
cyl/bitfit_t5-3b_cola | 45dce910d53af3ef6feab0ff658e2d67c3cf9723 | 2022-02-26T18:53:54.000Z | [
"pytorch",
"transformers"
] | null | false | cyl | null | cyl/bitfit_t5-3b_cola | 2 | null | transformers | 25,014 | Entry not found |
Daryaflp/roberta-retrained_ru_covid | 1fedc1684c0fde7a229571d7913185b41a54ce0d | 2022-02-27T16:18:22.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Daryaflp | null | Daryaflp/roberta-retrained_ru_covid | 2 | null | transformers | 25,015 | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-retrained_ru_covid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained_ru_covid
This model is a fine-tuned version of [blinoff/roberta-base-russian-v0](https://huggingface.co/blinoff/roberta-base-russian-v0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8518
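As the card omits a usage example, here is a minimal fill-mask sketch (the Russian sentence is a placeholder; the `<mask>` token is assumed from the RoBERTa tokenizer convention):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Daryaflp/roberta-retrained_ru_covid")
# Placeholder Russian sentence ("The vaccine protects against <mask>.").
fill_mask("Вакцина защищает от <mask>.")
```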
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Hallzy/Peterbot | 5b71700f504933b2d6a928e4692a016d8f92f99f | 2022-02-26T23:33:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Hallzy | null | Hallzy/Peterbot | 2 | null | transformers | 25,016 | ---
tags:
- conversational
---
# Peter from Your Boyfriend Game.
|
patrickvonplaten/wav2vec2-base-es-voxpopuli-v2 | 89d794100f6fced00f7d1ba8a46b40785f97156a | 2022-02-27T00:31:53.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers",
"correct"
] | null | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-es-voxpopuli-v2 | 2 | null | transformers | 25,017 | ---
tags:
- correct
---
Test
|
abhinema/gpt-medium | 4a20a1b2de79f2825c929a524a04e9e3c8cb5dab | 2022-03-04T04:21:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | abhinema | null | abhinema/gpt-medium | 2 | null | transformers | 25,018 | Entry not found |
Camzure/MaamiBot-test | de202135f518fa58007c086d18dae7bda5516cd2 | 2022-02-27T12:40:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Camzure | null | Camzure/MaamiBot-test | 2 | null | transformers | 25,019 | ---
tags:
- conversational
---
# MaamiBot |
maretamasaeva/roberta-finetuned-freeform | f0d0fd75935057cb37c61852fc8e16ed0725515f | 2022-03-29T14:19:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | maretamasaeva | null | maretamasaeva/roberta-finetuned-freeform | 2 | null | transformers | 25,020 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-finetuned-freeform
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-freeform
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6989
- Accuracy: 0.4668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6919 | 1.0 | 8094 | 0.6910 | 0.4668 |
| 0.6912 | 2.0 | 16188 | 0.6934 | 0.4668 |
| 0.6904 | 3.0 | 24282 | 0.6976 | 0.4668 |
| 0.6918 | 4.0 | 32376 | 0.6989 | 0.4668 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
kazandaev/opus-mt-en-ru-finetuned_v2 | 148730949a774fc76a4564b58c07a46e8aab70f3 | 2022-03-04T14:25:54.000Z | [
"pytorch",
"tensorboard",
"rust",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kazandaev | null | kazandaev/opus-mt-en-ru-finetuned_v2 | 2 | null | transformers | 25,021 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned_v2
This model is a fine-tuned version of [kazandaev/opus-mt-en-ru-finetuned_v2](https://huggingface.co/kazandaev/opus-mt-en-ru-finetuned_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8471
- Bleu: 37.5148
- Gen Len: 29.8495
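A minimal usage sketch with the standard `transformers` translation pipeline (the input sentence is a placeholder):
```python
from transformers import pipeline

# Marian-based English-to-Russian translation model.
translator = pipeline("translation", model="kazandaev/opus-mt-en-ru-finetuned_v2")
translator("The weather is nice today.")
```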
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 49
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.7688 | 1.0 | 50906 | 0.8533 | 37.1941 | 29.8644 |
| 0.764 | 2.0 | 101812 | 0.8504 | 37.1506 | 29.8481 |
| 0.7637 | 3.0 | 152718 | 0.8485 | 37.3499 | 29.7743 |
| 0.7593 | 4.0 | 203624 | 0.8477 | 37.4428 | 29.8165 |
| 0.7579 | 5.0 | 254530 | 0.8471 | 37.5148 | 29.8495 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
mipatov/rut5_nb_descr | 1a56358f35bbfc3b3cddb9f2d4091086aeec8c78 | 2022-02-27T23:43:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mipatov | null | mipatov/rut5_nb_descr | 2 | null | transformers | 25,022 | based on `sberbank-ai/ruT5-large`
fine-tuned to generate text descriptions for notebook devices.
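A minimal usage sketch (the expected input format is undocumented, so the prompt below is hypothetical):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="mipatov/rut5_nb_descr")
# Hypothetical prompt: a short notebook spec to expand into a description.
generator("ASUS ZenBook 14, Intel Core i7, 16GB RAM, 512GB SSD")
```
|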
danny911kr/tapas_simsiam_mlm_2 | 1d9b75a0a248494944afb1ff9d1f787d1e33aa13 | 2022-02-28T03:10:18.000Z | [
"pytorch",
"tapas",
"feature-extraction",
"transformers"
] | feature-extraction | false | danny911kr | null | danny911kr/tapas_simsiam_mlm_2 | 2 | null | transformers | 25,023 | Entry not found |
junnyu/flashquad_small_wwm_cluecorpussmall | 8f38e97e2ff46203bd77775439d4e4f377c39321 | 2022-02-28T03:41:06.000Z | [
"pytorch",
"flash_quad",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/flashquad_small_wwm_cluecorpussmall | 2 | null | transformers | 25,024 | Entry not found |
neal49/distilbert-sst2-freeze | fa2a47c03c88e8b5a1146a93994a882b941089fe | 2022-02-28T23:19:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | neal49 | null | neal49/distilbert-sst2-freeze | 2 | null | transformers | 25,025 | Entry not found |
facebook/wav2vec2-base-ro-voxpopuli-v2 | 48eb760786462f824757a5237f5968359c979795 | 2022-02-27T13:12:40.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"ro",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-ro-voxpopuli-v2 | 2 | null | transformers | 25,026 | ---
language: ro
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **ro** on **17.9k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **ro**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
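Pending such fine-tuning, the pretrained encoder can already be used to extract speech representations. A minimal sketch, assuming the checkpoint ships a standard feature-extractor config:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-base-ro-voxpopuli-v2"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name)  # encoder only; no CTC head

# One second of silence as placeholder audio; use real speech sampled at 16kHz.
audio = np.zeros(16_000, dtype=np.float32)
inputs = feature_extractor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, frames, 768)
```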
|
facebook/wav2vec2-base-it-voxpopuli-v2 | ca53d26733f609622fc37999ddfd4c832257d5c4 | 2022-02-27T13:12:17.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"it",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-it-voxpopuli-v2 | 2 | null | transformers | 25,027 | ---
language: it
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **it** on **21.9k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **it**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-de-voxpopuli-v2 | e0b603594e0d27db511346c91f7602a7b8db03a3 | 2022-02-27T13:13:15.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"de",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-de-voxpopuli-v2 | 2 | null | transformers | 25,028 | ---
language: de
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **de** on **23.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **de**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-sk-voxpopuli-v2 | 3a32a0746ade1ad6c6ab9071a85ca68cb48f7339 | 2022-02-27T13:14:37.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"sk",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-sk-voxpopuli-v2 | 2 | null | transformers | 25,029 | ---
language: sk
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sk** on **12.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sk**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-et-voxpopuli-v2 | 06e29dd8ae82fa8d2c632a0d44b3fff6719caf50 | 2022-02-27T13:14:58.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"et",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-et-voxpopuli-v2 | 2 | null | transformers | 25,030 | ---
language: et
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **et** on **10.6k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **et**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-lv-voxpopuli-v2 | 66d92c48e1ea737a638de13916dba5148b4a968e | 2022-02-27T13:15:26.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"lv",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-lv-voxpopuli-v2 | 2 | null | transformers | 25,031 | ---
language: lv
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **lv** on **13.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lv**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-uralic-voxpopuli-v2 | fec5ee0ce1419a5c13161d159fb2ad01253fbcb0 | 2022-02-27T12:43:18.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"uralic",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-uralic-voxpopuli-v2 | 2 | null | transformers | 25,032 | ---
language: uralic
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only in **uralic** on **42.5k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **uralic**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-hu-voxpopuli-v2 | 6d5f81b1b6f255b3858c09bbfe4edc6d2dfa34db | 2022-02-27T13:15:17.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"hu",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-hu-voxpopuli-v2 | 2 | null | transformers | 25,033 | ---
language: hu
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **hu** on **17.7k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **hu**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-romance-voxpopuli-v2 | f80097ef4718f536951758ff56603fa1057f010e | 2022-02-27T12:32:07.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"romance",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-romance-voxpopuli-v2 | 2 | null | transformers | 25,034 | ---
language: romance
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only in **romance** on **101.5k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **romance**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
niksmer/ManiBERT | 0a00871eda1f3756ba1d3e8f8f7d5e758413974b | 2022-03-24T09:03:13.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:mit",
"model-index"
] | text-classification | false | niksmer | null | niksmer/ManiBERT | 2 | null | transformers | 25,035 | ---
license: mit
metrics:
- accuracy
- precision
- recall
model-index:
- name: ManiBERT
results: []
widget:
- text: "Russia must end the war."
- text: "Democratic institutions must be supported."
- text: "The state must fight political corruption."
- text: "Our energy economy must be nationalised."
- text: "We must increase social spending."
---
# ManiBERT
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/).
## Model description
This model was trained on 115,943 manually annotated sentences to classify text into one of 56 political categories; the full list of categories appears in the evaluation table below.
## Intended uses & limitations
The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
```python
from transformers import pipeline
import pandas as pd
classifier = pipeline(
    task="text-classification",
    model="niksmer/ManiBERT")
# Load text data you want to classify
text = pd.read_csv("example.csv")["text_you_want_to_classify"].to_list()
# Inference
output = classifier(text)
# Print output
pd.DataFrame(output).head()
```
## Train Data
ManiBERT was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 115,943 sentences from 163 political manifestos in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 and 2020.
| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |
Canadian manifestos between 2004 and 2008 are used as test data.
The resulting datasets are highly imbalanced; see Evaluation.
## Evaluation
| Description | Label | Count Train Data | Count Validation Data | Count Test Data | Validation F1-Score | Test F1-Score |
|-------------------------------------------------------------------|-------|------------------|-----------------------|-----------------|---------------------|---------------|
| Foreign Special Relationships: Positive | 0 | 545 | 96 | 60 | 0.43 | 0.45 |
| Foreign Special Relationships: Negative | 1 | 66 | 14 | 22 | 0.22 | 0.09 |
| Anti-Imperialism | 2 | 93 | 16 | 1 | 0.16 | 0.00 |
| Military: Positive | 3 | 1,969 | 356 | 159 | 0.69 | 0.63 |
| Military: Negative | 4 | 489 | 89 | 52 | 0.59 | 0.63 |
| Peace | 5 | 418 | 80 | 49 | 0.57 | 0.64 |
| Internationalism: Positive | 6 | 2,401 | 417 | 404 | 0.60 | 0.54 |
| European Community/Union or Latin America Integration: Positive | 7 | 930 | 156 | 20 | 0.58 | 0.32 |
| Internationalism: Negative | 8 | 209 | 40 | 57 | 0.28 | 0.05 |
| European Community/Union or Latin America Integration: Negative | 9 | 520 | 81 | 0 | 0.39 | - |
| Freedom and Human Rights | 10 | 2,196 | 389 | 76 | 0.50 | 0.34 |
| Democracy | 11 | 3,045 | 534 | 206 | 0.53 | 0.51 |
| Constitutionalism: Positive | 12 | 259 | 48 | 12 | 0.34 | 0.22 |
| Constitutionalism: Negative | 13 | 380 | 72 | 2 | 0.34 | 0.00 |
| Decentralisation: Positive | 14 | 2,791 | 481 | 331 | 0.49 | 0.45 |
| Centralisation: Positive | 15 | 150 | 33 | 71 | 0.11 | 0.00 |
| Governmental and Administrative Efficiency | 16 | 3,905 | 711 | 105 | 0.50 | 0.32 |
| Political Corruption | 17 | 900 | 186 | 234 | 0.59 | 0.55 |
| Political Authority | 18 | 3,488 | 627 | 300 | 0.51 | 0.39 |
| Free Market Economy | 19 | 1,768 | 309 | 53 | 0.40 | 0.16 |
| Incentives: Positive | 20 | 3,100 | 544 | 81 | 0.52 | 0.28 |
| Market Regulation | 21 | 3,562 | 616 | 210 | 0.50 | 0.36 |
| Economic Planning | 22 | 533 | 93 | 67 | 0.31 | 0.12 |
| Corporatism/ Mixed Economy | 23 | 193 | 32 | 23 | 0.28 | 0.33 |
| Protectionism: Positive | 24 | 633 | 103 | 180 | 0.44 | 0.22 |
| Protectionism: Negative | 25 | 723 | 118 | 149 | 0.52 | 0.40 |
| Economic Goals | 26 | 817 | 139 | 148 | 0.05 | 0.00 |
| Keynesian Demand Management | 27 | 160 | 25 | 9 | 0.00 | 0.00 |
| Economic Growth: Positive | 28 | 3,142 | 607 | 374 | 0.53 | 0.30 |
| Technology and Infrastructure: Positive | 29 | 8,643 | 1,529 | 339 | 0.71 | 0.56 |
| Controlled Economy | 30 | 567 | 96 | 94 | 0.47 | 0.16 |
| Nationalisation | 31 | 832 | 157 | 27 | 0.56 | 0.16 |
| Economic Orthodoxy | 32 | 1,721 | 287 | 184 | 0.55 | 0.48 |
| Marxist Analysis: Positive | 33 | 148 | 33 | 0 | 0.20 | - |
| Anti-Growth Economy and Sustainability | 34 | 2,676 | 452 | 250 | 0.43 | 0.33 |
| Environmental Protection | 35 | 6,731 | 1,163 | 934 | 0.70 | 0.67 |
| Culture: Positive | 36 | 2,082 | 358 | 92 | 0.69 | 0.56 |
| Equality: Positive | 37 | 6,630 | 1,126 | 361 | 0.57 | 0.43 |
| Welfare State Expansion | 38 | 13,486 | 2,405 | 990 | 0.72 | 0.61 |
| Welfare State Limitation | 39 | 926 | 151 | 2 | 0.45 | 0.00 |
| Education Expansion | 40 | 7,191 | 1,324 | 274 | 0.78 | 0.63 |
| Education Limitation | 41 | 154 | 27 | 1 | 0.17 | 0.00 |
| National Way of Life: Positive | 42 | 2,105 | 385 | 395 | 0.48 | 0.34 |
| National Way of Life: Negative | 43 | 743 | 147 | 2 | 0.27 | 0.00 |
| Traditional Morality: Positive | 44 | 1,375 | 234 | 19 | 0.55 | 0.14 |
| Traditional Morality: Negative | 45 | 291 | 54 | 38 | 0.30 | 0.23 |
| Law and Order | 46 | 5,582 | 949 | 381 | 0.72 | 0.71 |
| Civic Mindedness: Positive | 47 | 1,348 | 229 | 27 | 0.45 | 0.28 |
| Multiculturalism: Positive | 48 | 2,006 | 355 | 71 | 0.61 | 0.35 |
| Multiculturalism: Negative | 49 | 144 | 31 | 7 | 0.33 | 0.00 |
| Labour Groups: Positive | 50 | 3,856 | 707 | 57 | 0.64 | 0.14 |
| Labour Groups: Negative | 51 | 208 | 35 | 0 | 0.44 | - |
| Agriculture and Farmers | 52 | 2,996 | 490 | 130 | 0.67 | 0.56 |
| Middle Class and Professional Groups | 53 | 271 | 38 | 12 | 0.38 | 0.40 |
| Underprivileged Minority Groups | 54 | 1,417 | 252 | 82 | 0.34 | 0.33 |
| Non-economic Demographic Groups | 55 | 2,429 | 435 | 106 | 0.42 | 0.24 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
```
training_args = TrainingArguments(
    output_dir="output",  # placeholder output directory (required argument)
    warmup_ratio=0.05,
    weight_decay=0.1,
    learning_rate=5e-05,
    fp16=True,
    evaluation_strategy="epoch",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    overwrite_output_dir=True,
    per_device_eval_batch_size=16,
    save_strategy="no",
    logging_dir='logs',
    logging_strategy='steps',
    logging_steps=10,
    push_to_hub=True,
    hub_strategy="end")
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-micro | F1-macro | F1-weighted | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 1.7638 | 1.0 | 1812 | 1.6471 | 0.5531 | 0.5531 | 0.3354 | 0.5368 | 0.5531 | 0.5531 |
| 1.4501 | 2.0 | 3624 | 1.5167 | 0.5807 | 0.5807 | 0.3921 | 0.5655 | 0.5807 | 0.5807 |
| 1.0638 | 3.0 | 5436 | 1.5017 | 0.5893 | 0.5893 | 0.4240 | 0.5789 | 0.5893 | 0.5893 |
| 0.9263 | 4.0 | 7248 | 1.5173 | 0.5975 | 0.5975 | 0.4499 | 0.5901 | 0.5975 | 0.5975 |
| 0.7859 | 5.0 | 9060 | 1.5574 | 0.5978 | 0.5978 | 0.4564 | 0.5903 | 0.5978 | 0.5978 |
### Overall evaluation
| Type | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| Validation | 0.60 | 0.46 | 0.59 |
| Test | 0.48 | 0.30 | 0.47 |
### Evaluation based on saliency theory
Saliency theory is a framework for analysing political text data. In short, parties tend to write about policy areas in which they believe they are seen as competent.
Voters tend to attribute policy competence in line with parties' assumed ideology. You can therefore analyze the share of each policy area in a party's manifesto to infer the party's ideology.
For such an analysis, the Manifesto Project created the rile-index. For a quick overview, check [this](https://manifesto-project.wzb.eu/down/tutorials/main-dataset.html#measuring-parties-left-right-positions).
In the following plot, the predicted and original rile-indices are shown per manifesto in the test dataset. Overall, the Pearson correlation between the predicted and original rile-indices is 0.95. As an alternative, you can use [RoBERTa-RILE](https://huggingface.co/niksmer/RoBERTa-RILE).

### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
niksmer/RoBERTa-RILE | 7249994599dd123862e64026d3517e98be502e9f | 2022-03-24T09:19:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:mit",
"model-index"
] | text-classification | false | niksmer | null | niksmer/RoBERTa-RILE | 2 | null | transformers | 25,036 | ---
license: mit
metrics:
- accuracy
- precision
- recall
model-index:
- name: RoBERTa-RILE
results: []
widget:
- text: "Russia must end the war."
- text: "Democratic institutions must be supported."
- text: "The state must fight political corruption."
- text: "Our energy economy must be nationalised."
- text: "We must increase social spending."
---
# RoBERTa-RILE
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/).
## Model description
This model was trained on 115,943 manually annotated sentences to classify text into one of three political categories: "neutral", "left", "right".
## Intended uses & limitations
The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
```python
from transformers import pipeline
import pandas as pd
classifier = pipeline(
task="text-classification",
model="niksmer/RoBERTa-RILE")
# Load text data you want to classify
text = pd.read_csv("example.csv")["text_you_want_to_classify"].to_list()
# Inference
output = classifier(text)
# Print output
pd.DataFrame(output).head()
```
## Training and evaluation data
RoBERTa-RILE was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets): 115,943 sentences from 163 political manifestos in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States), published between 1992 and 2020.
| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |
Canadian manifestos between 2004 and 2008 are used as test data.
The Manifesto Project manually annotates individual sentences from political party manifestos in over 50 main categories - see the [codebook](https://manifesto-project.wzb.eu/down/papers/handbook_2021_version_5.pdf) for the exact definition of each category. On this basis, it created a validated left-right scale, the rile index, which aggregates manifestos in a standardized, one-dimensional political space from left to right based on saliency theory.
RoBERTa-RILE classifies texts based on the rile index.
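As a rough sketch of that aggregation step (simplified to the three labels above; the official rile index is defined over specific Manifesto Project categories, so treat this as an approximation):
```python
from collections import Counter

def rile_score(labels):
    """Simplified rile-style score: share of 'right' minus share of 'left' sentences."""
    counts = Counter(labels)
    return 100 * (counts["right"] - counts["left"]) / sum(counts.values())

# Predicted labels for one manifesto's sentences:
print(rile_score(["left", "neutral", "right", "right", "neutral"]))  # 20.0
```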
### Train data
The training data was slightly imbalanced.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 52,277 |
| 1 | left | 37,106 |
| 2 | right | 26,560 |
Overall count: 115,943
### Validation data
The validation set was randomly sampled.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 9,198 |
| 1 | left | 6,637 |
| 2 | right | 4,626 |
Overall count: 20,461
### Test data
The test dataset contains ten Canadian manifestos published between 2004 and 2008.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 3,881 |
| 1 | left | 2,611 |
| 2 | right | 1,838 |
Overall count: 8,330
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
```
training_args = TrainingArguments(
warmup_ratio=0.05,
weight_decay=0.1,
learning_rate=1e-05,
fp16 = True,
evaluation_strategy="epoch",
num_train_epochs=5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
save_strategy="no",
logging_dir='logs',
logging_strategy= 'steps',
logging_steps=10,
push_to_hub=True,
hub_strategy="end")
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-micro | F1-macro | F1-weighted | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 0.7442 | 1.0 | 1812 | 0.6827 | 0.7120 | 0.7120 | 0.7007 | 0.7126 | 0.7120 | 0.7120 |
| 0.6447 | 2.0 | 3624 | 0.6618 | 0.7281 | 0.7281 | 0.7169 | 0.7281 | 0.7281 | 0.7281 |
| 0.5467 | 3.0 | 5436 | 0.6657 | 0.7309 | 0.7309 | 0.7176 | 0.7295 | 0.7309 | 0.7309 |
| 0.5179 | 4.0 | 7248 | 0.6654 | 0.7346 | 0.7346 | 0.7240 | 0.7345 | 0.7346 | 0.7346 |
| 0.4787 | 5.0 | 9060 | 0.6757 | 0.7350 | 0.7350 | 0.7241 | 0.7347 | 0.7350 | 0.7350 |
### Validation evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| RoBERTa-RILE | 0.74 | 0.72 | 0.73 |
### Test evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| RoBERTa-RILE | 0.69 | 0.67 | 0.69 |
### Evaluation per category
| Label | Validation F1-Score | Test F1-Score |
|-----------------------------|---------------------|---------------|
| neutral | 0.77 | 0.74 |
| left | 0.73 | 0.65 |
| right | 0.67 | 0.62 |
### Evaluation based on saliency theory
Saliency theory is an approach to analysing political text data. In short, parties tend to write about the policy areas in which they believe they are perceived as competent, and voters tend to attribute policy competence in line with a party's assumed ideology. The share of policy areas a party covers in its manifesto can therefore be used to infer its ideology.
The Manifesto Project developed the rile index for exactly this kind of analysis. For a quick overview, check [this tutorial](https://manifesto-project.wzb.eu/down/tutorials/main-dataset.html#measuring-parties-left-right-positions).
The following plot shows the predicted and original rile indices per manifesto in the test dataset. Overall, the Pearson correlation between the predicted and original rile indices is 0.95. As an alternative, you can use [ManiBERT](https://huggingface.co/niksmer/ManiBERT).

### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
timoneda/XLM-R-Racismo | 638056652a249fe3e4898ab28766816dfc8f2acf | 2021-08-11T18:55:38.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | timoneda | null | timoneda/XLM-R-Racismo | 2 | null | transformers | 25,037 | |
Akash7897/bert-base-cased-wikitext2 | 47d3ea9e1e3e4f114478e7d96adef191b6edf8ef | 2022-03-01T10:29:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Akash7897 | null | Akash7897/bert-base-cased-wikitext2 | 2 | null | transformers | 25,038 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0915 | 1.0 | 2346 | 7.0517 |
| 6.905 | 2.0 | 4692 | 6.8735 |
| 6.8565 | 3.0 | 7038 | 6.8924 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
AndyyyCai/bert-base-uncased-finetuned-copa | 0bb6de71cc070a8941075d0aee15e6ac07f1a35f | 2022-03-01T01:57:06.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | AndyyyCai | null | AndyyyCai/bert-base-uncased-finetuned-copa | 2 | null | transformers | 25,039 | Entry not found |
Sarahliu186/wav2vec2-base-timit-demo-colab | 669be1e890e7fc6c2099fd19d22ce2431439ab69 | 2022-03-01T04:01:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Sarahliu186 | null | Sarahliu186/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 25,040 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mohamed-illiyas/wav2vec-malayalam-new | f7f371015f15ae427c1306f19e5c75880caecce4 | 2022-03-01T11:43:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | mohamed-illiyas | null | mohamed-illiyas/wav2vec-malayalam-new | 2 | null | transformers | 25,041 | Entry not found |
eson/dummy-model | 2610165ee1b00b61a1f0eae4e7da4c854c20642d | 2022-03-01T11:32:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eson | null | eson/dummy-model | 2 | null | transformers | 25,042 | Entry not found |
firqaaa/medbert-base-indonesian | b82fd8549c96ccc23d4eb2caf276e9e110672e96 | 2021-07-20T16:33:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | firqaaa | null | firqaaa/medbert-base-indonesian | 2 | null | transformers | 25,043 | Entry not found |
Kevincp560/bart-large-finetuned-pubmed | 5969ae4addb05b036c898b8820cda38bb4c686d1 | 2022-03-01T18:35:04.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/bart-large-finetuned-pubmed | 2 | null | transformers | 25,044 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: bart-large-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 10.946
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8135
- Rouge1: 10.946
- Rouge2: 5.0933
- Rougel: 9.5608
- Rougelsum: 10.4259
- Gen Len: 19.0495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.0861 | 1.0 | 4000 | 1.8909 | 8.7344 | 3.6919 | 7.8804 | 8.3305 | 20.0 |
| 1.8996 | 2.0 | 8000 | 1.8261 | 10.2124 | 4.6212 | 8.9842 | 9.7417 | 17.632 |
| 1.7459 | 3.0 | 12000 | 1.8160 | 9.4933 | 4.4117 | 8.3977 | 9.0758 | 16.4775 |
| 1.6258 | 4.0 | 16000 | 1.8136 | 10.8248 | 5.0335 | 9.4286 | 10.3123 | 18.724 |
| 1.5214 | 5.0 | 20000 | 1.8135 | 10.946 | 5.0933 | 9.5608 | 10.4259 | 19.0495 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
spy24/autonlp-US-to-UK-604417040 | e3b02f6759a8ab55af7db62a18f55b9212d863be | 2022-03-01T13:16:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-US-to-UK",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autonlp-US-to-UK-604417040 | 2 | null | transformers | 25,045 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-US-to-UK
co2_eq_emissions: 3.3271667948644614
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 604417040
- CO2 Emissions (in grams): 3.3271667948644614
## Validation Metrics
- Loss: 1.919085144996643
- Rouge1: 39.2808
- Rouge2: 4.905
- RougeL: 39.113
- RougeLsum: 39.1463
- Gen Len: 3.4611
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-US-to-UK-604417040
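# The same request from Python (a sketch using the endpoint above):
#
#   import requests
#   resp = requests.post(
#       "https://api-inference.huggingface.co/spy24/autonlp-US-to-UK-604417040",
#       headers={"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"},
#       json={"inputs": "I love AutoNLP"},
#   )
#   print(resp.json())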
``` |
ali2066/twitter_RoBERTa_token_itr0_0.0001_all_01_03_2022-14_26_43 | 95f09402ec9064ab0cb41667bf17c3af45f04e10 | 2022-03-01T13:30:07.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/twitter_RoBERTa_token_itr0_0.0001_all_01_03_2022-14_26_43 | 2 | null | transformers | 25,046 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_0.0001_all_01_03_2022-14_26_43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_0.0001_all_01_03_2022-14_26_43
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2591
- Precision: 0.4174
- Recall: 0.5678
- F1: 0.4811
- Accuracy: 0.8852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4690 | 0.3732 | 0.1830 | 0.2456 | 0.7509 |
| No log | 2.0 | 60 | 0.3936 | 0.2067 | 0.3559 | 0.2615 | 0.7851 |
| No log | 3.0 | 90 | 0.3019 | 0.3658 | 0.4904 | 0.4190 | 0.8703 |
| No log | 4.0 | 120 | 0.2510 | 0.4387 | 0.5137 | 0.4732 | 0.8889 |
| No log | 5.0 | 150 | 0.2481 | 0.4196 | 0.5511 | 0.4764 | 0.8881 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20 | 3e64ede9217f8623ff3d7558b92d0a90f4018914 | 2022-03-01T13:46:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20 | 2 | null | transformers | 25,047 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6113
- Precision: 0.0097
- Recall: 0.0145
- F1: 0.0116
- Accuracy: 0.6780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 10 | 0.6399 | 0.0 | 0.0 | 0.0 | 0.6603 |
| No log | 2.0 | 20 | 0.6192 | 0.0 | 0.0 | 0.0 | 0.6603 |
| No log | 3.0 | 30 | 0.6133 | 0.0 | 0.0 | 0.0 | 0.6605 |
| No log | 4.0 | 40 | 0.6142 | 0.0 | 0.0 | 0.0 | 0.6617 |
| No log | 5.0 | 50 | 0.6129 | 0.0 | 0.0 | 0.0 | 0.6632 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39 | 5e9e561a5ef511db04014d179c5e81195b4c1761 | 2022-03-01T14:05:57.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39 | 2 | null | transformers | 25,048 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2903
- Precision: 0.2440
- Recall: 0.4465
- F1: 0.3155
- Accuracy: 0.8706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4378 | 0.0463 | 0.1136 | 0.0658 | 0.7742 |
| No log | 2.0 | 60 | 0.3739 | 0.1472 | 0.3756 | 0.2115 | 0.8284 |
| No log | 3.0 | 90 | 0.3422 | 0.1865 | 0.4330 | 0.2607 | 0.8374 |
| No log | 4.0 | 120 | 0.3286 | 0.2243 | 0.4833 | 0.3064 | 0.8438 |
| No log | 5.0 | 150 | 0.3239 | 0.2356 | 0.4809 | 0.3163 | 0.8490 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
QuickRead/pegasus-reddit-full | 95f15c5c541b22daf2af28c029054f0e040621d9 | 2022-03-03T22:43:54.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | QuickRead | null | QuickRead/pegasus-reddit-full | 2 | null | transformers | 25,049 | Entry not found |
BigSalmon/InformalToFormalLincoln24 | 325578834b2b134380fc3d9ff6a1ea6ec127643b | 2022-03-02T01:11:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln24 | 2 | null | transformers | 25,050 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln24")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln24")
```
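Continuing from the loading snippet above, a minimal generation sketch (the prompt is taken from the examples below; sampling settings are illustrative, not the author's):
```python
prompt = "informal english: i am very ready to do that just that.\nTranslated into the Style of Abraham Lincoln:"
inputs = tokenizer(prompt, return_tensors="pt")
# Sample one continuation; tweak max_new_tokens / top_p as needed.
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```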
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
``` |
Verge/Peterbot | d9857ab4416dd48cffb739a968dd6d9363aaaf17 | 2022-03-02T04:28:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Verge | null | Verge/Peterbot | 2 | null | transformers | 25,051 | ---
tags:
- conversational
---
# Peter from Your Boyfriend Game.
|
sileod/medqa | 1e86578301c4866be815edc33382024b2f12ef75 | 2022-06-09T08:59:34.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | sileod | null | sileod/medqa | 2 | null | transformers | 25,052 | Entry not found |
spy24/autonlp-US-to-UK2-606317091 | 9dad33f25eacd1bc6dbc0cdb3dd7c7278024b49f | 2022-03-02T09:03:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-US-to-UK2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autonlp-US-to-UK2-606317091 | 2 | 1 | transformers | 25,053 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-US-to-UK2
co2_eq_emissions: 1.1913570653422176
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 606317091
- CO2 Emissions (in grams): 1.1913570653422176
## Validation Metrics
- Loss: 1.9264822006225586
- Rouge1: 44.2035
- Rouge2: 6.134
- RougeL: 43.9114
- RougeLsum: 44.0231
- Gen Len: 3.6134
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-US-to-UK2-606317091
``` |
facebook/maskformer-swin-base-coco | 231e5833faa7ac890148d6b53ec6c8e3db8fd50d | 2022-04-04T16:02:06.000Z | [
"pytorch",
"maskformer",
"dataset:coco",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-base-coco | 2 | null | transformers | 25,054 | ---
license: apache-2.0
tags:
- vision
- image-segmentatiom
datasets:
- coco
---
# MaskFormer
MaskFormer model trained on COCO. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-coco")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
jcai1/ss_mrpc | f8df6cd2216489d69902452e7788b63339d72d2f | 2022-03-02T14:32:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jcai1 | null | jcai1/ss_mrpc | 2 | null | transformers | 25,055 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ss_mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ss_mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5960
- Accuracy: 0.8799
- F1: 0.9148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
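A rough reconstruction of these settings as Hugging Face `TrainingArguments`; the original training script is not included in this card, so the output directory and anything not listed above are assumptions or Trainer defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ss_mrpc",             # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
)
```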
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3655 | 0.8578 | 0.8990 |
| 0.524 | 2.0 | 918 | 0.6061 | 0.8260 | 0.8823 |
| 0.2971 | 3.0 | 1377 | 0.5960 | 0.8799 | 0.9148 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Kuray107/wsj0-full-supervised | 9422f3af882708b1c0403f90e89848ef7139d16c | 2022-03-03T11:16:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/wsj0-full-supervised | 2 | null | transformers | 25,056 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wsj0-full-supervised
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsj0-full-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Wer: 0.0343
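For inference, a hedged usage sketch (the audio path is a placeholder; WSJ audio is 16 kHz speech):
```python
from transformers import pipeline

# Transcribe a single utterance with the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="Kuray107/wsj0-full-supervised")
print(asr("path/to/utterance.wav"))  # placeholder path
```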
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.517 | 0.86 | 500 | 2.9475 | 1.0 |
| 2.2387 | 1.72 | 1000 | 0.4004 | 0.3498 |
| 0.3081 | 2.57 | 1500 | 0.1362 | 0.1159 |
| 0.1744 | 3.43 | 2000 | 0.1125 | 0.0929 |
| 0.1285 | 4.29 | 2500 | 0.0894 | 0.0727 |
| 0.1015 | 5.15 | 3000 | 0.0852 | 0.0642 |
| 0.0811 | 6.0 | 3500 | 0.0789 | 0.0614 |
| 0.0748 | 6.86 | 4000 | 0.0746 | 0.0529 |
| 0.0639 | 7.72 | 4500 | 0.0714 | 0.0481 |
| 0.0606 | 8.58 | 5000 | 0.0698 | 0.0489 |
| 0.0525 | 9.43 | 5500 | 0.0747 | 0.0464 |
| 0.0489 | 10.29 | 6000 | 0.0594 | 0.0396 |
| 0.0419 | 11.15 | 6500 | 0.0600 | 0.0359 |
| 0.0414 | 12.01 | 7000 | 0.0612 | 0.0412 |
| 0.0383 | 12.86 | 7500 | 0.0676 | 0.0392 |
| 0.0352 | 13.72 | 8000 | 0.0626 | 0.0388 |
| 0.034 | 14.58 | 8500 | 0.0699 | 0.0372 |
| 0.0309 | 15.44 | 9000 | 0.0807 | 0.0420 |
| 0.0295 | 16.3 | 9500 | 0.0796 | 0.0396 |
| 0.0273 | 17.15 | 10000 | 0.0716 | 0.0376 |
| 0.0271 | 18.01 | 10500 | 0.0657 | 0.0384 |
| 0.0251 | 18.87 | 11000 | 0.0585 | 0.0351 |
| 0.024 | 19.73 | 11500 | 0.0557 | 0.0347 |
| 0.0252 | 20.58 | 12000 | 0.0609 | 0.0327 |
| 0.0231 | 21.44 | 12500 | 0.0720 | 0.0368 |
| 0.0202 | 22.3 | 13000 | 0.0625 | 0.0343 |
| 0.0195 | 23.16 | 13500 | 0.0635 | 0.0372 |
| 0.0201 | 24.01 | 14000 | 0.0582 | 0.0335 |
| 0.0183 | 24.87 | 14500 | 0.0562 | 0.0343 |
| 0.0183 | 25.73 | 15000 | 0.0629 | 0.0335 |
| 0.0175 | 26.59 | 15500 | 0.0593 | 0.0323 |
| 0.017 | 27.44 | 16000 | 0.0631 | 0.0339 |
| 0.0162 | 28.3 | 16500 | 0.0597 | 0.0335 |
| 0.0169 | 29.16 | 17000 | 0.0623 | 0.0343 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
repro-rights-amicus-briefs/bert-base-uncased-finetuned-RRamicus | b561fd7877927a81393f7509ee0f54df398794a1 | 2022-01-10T21:19:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | repro-rights-amicus-briefs | null | repro-rights-amicus-briefs/bert-base-uncased-finetuned-RRamicus | 2 | null | transformers | 25,057 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reprorights-amicus-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reprorights-amicus-bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7763 | 1.0 | 1479 | 1.6789 |
| 1.76 | 2.0 | 2958 | 1.6199 |
| 1.6881 | 3.0 | 4437 | 1.5683 |
| 1.6424 | 4.0 | 5916 | 1.5432 |
| 1.6131 | 5.0 | 7395 | 1.5269 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
shahp7575/electricidad-base-muchocine-finetuned | 9996b2d766bb44b4fb3b69a30a78bb37a61f5ae7 | 2022-03-03T05:20:16.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"es",
"dataset:muchocine",
"transformers",
"spanish",
"sentiment"
] | text-classification | false | shahp7575 | null | shahp7575/electricidad-base-muchocine-finetuned | 2 | null | transformers | 25,058 | ---
language:
- es
tags:
- spanish
- sentiment
datasets:
- muchocine
widget:
- "Increíble pelicula. ¡Altamente recomendado!"
- "Extremadamente malo. Baja calidad"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-base-muchocine-finetuned
This model fine-tunes [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the [muchocine](https://huggingface.co/datasets/muchocine) dataset for sentiment classification to predict *star_rating*.
### How to use
The model can be used directly with the HuggingFace `pipeline`.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("shahp7575/electricidad-base-muchocine-finetuned")
model = AutoModelForSequenceClassification.from_pretrained("shahp7575/electricidad-base-muchocine-finetuned")
```
### Examples
```python
from transformers import pipeline
clf = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
clf('Esta película es una joya. Todo fue perfecto: historia, casting, dirección. Me encantó el clímax.')
>>> [{'label': '5', 'score': 0.9658033847808838}]
clf("La historia y el casting fueron geniales.")
>>> [{'label': '4', 'score': 0.6666394472122192}]
clf("Me gustó pero podría ser mejor.")
>>> [{'label': '3', 'score': 0.7013391852378845}]
clf("dinero tirado en esta pelicula")
>>> [{'label': '2', 'score': 0.7564149498939514}]
clf("esta película es una película absolutamente repugnante. odio todo al respecto. gastó tanto dinero.")
>>> [{'label': '1', 'score': 0.3040296733379364}]
```
|
cammy/bart-large-cnn-finetuned-new-100-pad-early | c025145e0545b8c5b570aaed87b83a1910c3ca5b | 2022-03-03T10:23:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-new-100-pad-early | 2 | null | transformers | 25,059 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-new-100-pad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-new-100-pad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9543
- Rouge1: 21.8858
- Rouge2: 8.1444
- Rougel: 16.5751
- Rougelsum: 19.163
- Gen Len: 66.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 0.8692 | 20.2714 | 6.206 | 16.3362 | 18.7117 | 66.4 |
| No log | 2.0 | 200 | 0.9543 | 21.8858 | 8.1444 | 16.5751 | 19.163 | 66.8 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
RobW/longformer-base-4096-finetuned-chunk-1 | 0d76916d0d8ffd58ba0bc08d992bd6dcd94c574e | 2022-03-03T15:00:58.000Z | [
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | RobW | null | RobW/longformer-base-4096-finetuned-chunk-1 | 2 | null | transformers | 25,060 | Entry not found |
Kuray107/wsj0-5percent-supervised | dbd6938ced244faeaacd50a410309145b6615998 | 2022-03-04T20:16:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/wsj0-5percent-supervised | 2 | null | transformers | 25,061 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wsj0-5percent-supervised
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsj0-5percent-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3883
- Wer: 0.1555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.0248 | 16.67 | 500 | 2.9406 | 1.0 |
| 2.0466 | 33.33 | 1000 | 0.3935 | 0.3300 |
| 0.1486 | 50.0 | 1500 | 0.3091 | 0.1931 |
| 0.052 | 66.67 | 2000 | 0.3562 | 0.2052 |
| 0.0309 | 83.33 | 2500 | 0.3252 | 0.1773 |
| 0.0228 | 100.0 | 3000 | 0.3360 | 0.1652 |
| 0.0177 | 116.67 | 3500 | 0.3423 | 0.1603 |
| 0.0142 | 133.33 | 4000 | 0.3416 | 0.1611 |
| 0.0119 | 150.0 | 4500 | 0.3663 | 0.1583 |
| 0.0094 | 166.67 | 5000 | 0.3617 | 0.1567 |
| 0.0093 | 183.33 | 5500 | 0.3738 | 0.1668 |
| 0.0079 | 200.0 | 6000 | 0.3881 | 0.1652 |
| 0.0065 | 216.67 | 6500 | 0.3752 | 0.1611 |
| 0.0056 | 233.33 | 7000 | 0.3798 | 0.1603 |
| 0.0057 | 250.0 | 7500 | 0.3944 | 0.1624 |
| 0.0047 | 266.67 | 8000 | 0.4038 | 0.1583 |
| 0.0041 | 283.33 | 8500 | 0.3928 | 0.1547 |
| 0.0036 | 300.0 | 9000 | 0.3883 | 0.1555 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
T-202/dummy-model | c08f687ba81e93970c97e75dc5f3a994ee127c03 | 2022-03-03T16:21:20.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | T-202 | null | T-202/dummy-model | 2 | null | transformers | 25,062 | Entry not found |
Kevincp560/t5-small-finetuned-pubmed | 5e057adad16a7437e8f1b68fce77b6df2f485171 | 2022-03-03T17:22:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/t5-small-finetuned-pubmed | 2 | null | transformers | 25,063 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: t5-small-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 8.8295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-pubmed
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2635
- Rouge1: 8.8295
- Rouge2: 3.2594
- Rougel: 7.9975
- Rougelsum: 8.4483
- Gen Len: 19.0
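A minimal inference sketch (the input text is a placeholder; `max_length` mirrors the generation length reported above):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Kevincp560/t5-small-finetuned-pubmed")
article = "..."  # placeholder: a PubMed article body
print(summarizer(article, max_length=19))
```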
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.5892 | 1.0 | 4000 | 2.3616 | 10.1169 | 3.9666 | 8.8854 | 9.5836 | 19.0 |
| 2.559 | 2.0 | 8000 | 2.3045 | 9.4321 | 3.5398 | 8.424 | 8.984 | 19.0 |
| 2.5029 | 3.0 | 12000 | 2.2820 | 9.1658 | 3.3686 | 8.2222 | 8.7311 | 19.0 |
| 2.4673 | 4.0 | 16000 | 2.2692 | 8.8973 | 3.2617 | 8.0395 | 8.5046 | 19.0 |
| 2.4331 | 5.0 | 20000 | 2.2635 | 8.8295 | 3.2594 | 7.9975 | 8.4483 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Kevincp560/wikihow-t5-small-finetuned-pubmed | abecc8017bc5d8964a4cb91ef5bdf4ead76fec67 | 2022-03-03T20:22:04.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/wikihow-t5-small-finetuned-pubmed | 2 | null | transformers | 25,064 | ---
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: wikihow-t5-small-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 8.9619
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikihow-t5-small-finetuned-pubmed
This model is a fine-tuned version of [deep-learning-analytics/wikihow-t5-small](https://huggingface.co/deep-learning-analytics/wikihow-t5-small) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2702
- Rouge1: 8.9619
- Rouge2: 3.2719
- Rougel: 8.1558
- Rougelsum: 8.5714
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5984 | 1.0 | 4000 | 2.3696 | 10.237 | 3.8609 | 8.9776 | 9.677 | 19.0 |
| 2.5677 | 2.0 | 8000 | 2.3132 | 9.302 | 3.4499 | 8.3816 | 8.8831 | 19.0 |
| 2.5038 | 3.0 | 12000 | 2.2884 | 9.0578 | 3.3103 | 8.23 | 8.6723 | 19.0 |
| 2.4762 | 4.0 | 16000 | 2.2758 | 9.0001 | 3.2882 | 8.1845 | 8.6084 | 19.0 |
| 2.4393 | 5.0 | 20000 | 2.2702 | 8.9619 | 3.2719 | 8.1558 | 8.5714 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
eson/kplug-sum | 3465fa4abce77aa7eef8cae767e6b53b2ba1eed4 | 2022-03-04T03:17:17.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | eson | null | eson/kplug-sum | 2 | null | transformers | 25,065 | Entry not found |
zenham/khemx | 3f4b7addd449766b345c386c7a680db0ad3737f1 | 2022-03-05T06:45:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | zenham | null | zenham/khemx | 2 | null | transformers | 25,066 | ---
tags:
- conversational
---
# khemx DialoGPT Model |
mmaguero/gn-bert-large-cased | 376900a847948fad823618ec54f5b23a039daba7 | 2022-03-06T08:10:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"gn",
"dataset:wikipedia",
"dataset:wiktionary",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | mmaguero | null | mmaguero/gn-bert-large-cased | 2 | null | transformers | 25,067 | ---
language: gn
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteĩ táva oĩva [MASK] retãme "
---
# BERT-i-large-cased (gnBERT-large-cased)
A pre-trained BERT model for **Guarani** (24 layers, cased). Trained on Wikipedia + Wiktionary (~800K tokens).
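A short usage sketch with the fill-mask pipeline, reusing the widget example above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mmaguero/gn-bert-large-cased")
print(fill_mask("Paraguay ha'e peteĩ táva oĩva [MASK] retãme", top_k=5))
```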
|
nielsr/dpt-large-redesign | 42b643d94ca72628c2e944f69b4f2648e9fbd85d | 2022-03-04T17:54:17.000Z | [
"pytorch",
"dpt",
"transformers"
] | null | false | nielsr | null | nielsr/dpt-large-redesign | 2 | null | transformers | 25,068 | Entry not found |
Ebtihal/AraBertMo_base_V10 | c127221347d465d427b71fd4db3e2d753cb201f3 | 2022-03-15T19:10:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"dataset:OSCAR",
"transformers",
"Fill-Mask",
"autotrain_compatible"
] | fill-mask | false | Ebtihal | null | Ebtihal/AraBertMo_base_V10 | 2 | null | transformers | 25,069 | Arabic Model AraBertMo_base_V10
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
The `AraBertMo_base_V10` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 10 | 64 | 4700 | 9h 13m 43s | 7.2395 |
## Load Pretrained Model
You can use this model after installing `torch` or `tensorflow` together with the Hugging Face `transformers` library, and initialize it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V10")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V10")
```
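Or through the fill-mask pipeline, reusing one of the widget examples above (a short usage sketch):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V10")
print(fill_mask("السلام عليكم ورحمة[MASK] وبركاتة"))
```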
## This model was built for master's degree research at:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
azaninello/gpt2-finetuned-shrooms | 12224bf655af8cc072dafdcae36dd2e697e0783e | 2022-03-06T13:16:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | azaninello | null | azaninello/gpt2-finetuned-shrooms | 2 | null | transformers | 25,070 | Entry not found |
tmills/cnlpt-negation-roberta-sharpseed | 3ed14dd5f69ac3c3ecf9ac86111448063186996c | 2022-03-04T21:16:45.000Z | [
"pytorch",
"cnlpt",
"transformers"
] | null | false | tmills | null | tmills/cnlpt-negation-roberta-sharpseed | 2 | null | transformers | 25,071 | Entry not found |
akadriu/wav2vec2-large-xlsr-53-Total2e-4_3 | cf087cf53636d01170dbaa7e0a14deabf4dc8724 | 2022-03-14T11:05:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akadriu | null | akadriu/wav2vec2-large-xlsr-53-Total2e-4_3 | 2 | null | transformers | 25,072 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-Total2e-4_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total2e-4_3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2893
- Wer: 0.1863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.16 | 0.1 | 200 | 2.9123 | 0.9707 |
| 2.4599 | 0.2 | 400 | 0.8145 | 0.6906 |
| 1.0523 | 0.3 | 600 | 0.5247 | 0.4823 |
| 0.8965 | 0.4 | 800 | 0.4391 | 0.4416 |
| 0.7994 | 0.5 | 1000 | 0.3889 | 0.3773 |
| 0.7491 | 0.6 | 1200 | 0.3604 | 0.3305 |
| 0.7425 | 0.7 | 1400 | 0.3543 | 0.3277 |
| 0.7253 | 0.8 | 1600 | 0.3397 | 0.3143 |
| 0.7221 | 0.9 | 1800 | 0.3341 | 0.2979 |
| 0.6853 | 1.0 | 2000 | 0.3244 | 0.2906 |
| 0.6107 | 1.1 | 2200 | 0.3127 | 0.2771 |
| 0.6233 | 1.2 | 2400 | 0.3116 | 0.2721 |
| 0.6214 | 1.3 | 2600 | 0.3256 | 0.2671 |
| 0.6511 | 1.4 | 2800 | 0.3019 | 0.2570 |
| 0.6491 | 1.5 | 3000 | 0.2961 | 0.2576 |
| 0.6411 | 1.6 | 3200 | 0.2963 | 0.2535 |
| 0.5963 | 1.7 | 3400 | 0.2939 | 0.2526 |
| 0.6146 | 1.8 | 3600 | 0.2908 | 0.2490 |
| 0.6291 | 1.9 | 3800 | 0.2851 | 0.2448 |
| 0.6154 | 2.0 | 4000 | 0.2861 | 0.2424 |
| 0.5652 | 2.1 | 4200 | 0.2852 | 0.2411 |
| 0.5648 | 2.2 | 4400 | 0.2856 | 0.2350 |
| 0.5365 | 2.3 | 4600 | 0.2802 | 0.2395 |
| 0.5855 | 2.4 | 4800 | 0.2883 | 0.2374 |
| 0.5978 | 2.5 | 5000 | 0.2855 | 0.2364 |
| 0.5863 | 2.6 | 5200 | 0.2736 | 0.2277 |
| 0.5569 | 2.7 | 5400 | 0.2746 | 0.2293 |
| 0.5628 | 2.8 | 5600 | 0.2719 | 0.2249 |
| 0.5655 | 2.9 | 5800 | 0.2653 | 0.2224 |
| 0.5578 | 3.0 | 6000 | 0.2685 | 0.2243 |
| 0.5303 | 3.1 | 6200 | 0.2696 | 0.2204 |
| 0.5316 | 3.2 | 6400 | 0.2733 | 0.2247 |
| 0.5476 | 3.3 | 6600 | 0.2716 | 0.2203 |
| 0.5326 | 3.4 | 6800 | 0.2697 | 0.2209 |
| 0.5375 | 3.5 | 7000 | 0.2701 | 0.2197 |
| 0.5364 | 3.6 | 7200 | 0.2655 | 0.2165 |
| 0.503 | 3.7 | 7400 | 0.2650 | 0.2125 |
| 0.5284 | 3.8 | 7600 | 0.2672 | 0.2162 |
| 0.5251 | 3.9 | 7800 | 0.2669 | 0.2172 |
| 0.5299 | 4.0 | 8000 | 0.2632 | 0.2081 |
| 0.4904 | 4.1 | 8200 | 0.2674 | 0.2099 |
| 0.496 | 4.2 | 8400 | 0.2700 | 0.2143 |
| 0.5067 | 4.3 | 8600 | 0.2648 | 0.2090 |
| 0.506 | 4.4 | 8800 | 0.2595 | 0.2069 |
| 0.4795 | 4.5 | 9000 | 0.2653 | 0.2072 |
| 0.5149 | 4.6 | 9200 | 0.2618 | 0.2073 |
| 0.4786 | 4.7 | 9400 | 0.2632 | 0.2058 |
| 0.5056 | 4.8 | 9600 | 0.2674 | 0.2123 |
| 0.5059 | 4.9 | 9800 | 0.2642 | 0.2115 |
| 0.5119 | 5.0 | 10000 | 0.2672 | 0.2089 |
| 0.4619 | 5.1 | 10200 | 0.2658 | 0.2062 |
| 0.4647 | 5.2 | 10400 | 0.2664 | 0.2025 |
| 0.4707 | 5.3 | 10600 | 0.2656 | 0.2084 |
| 0.486 | 5.4 | 10800 | 0.2728 | 0.2029 |
| 0.4785 | 5.5 | 11000 | 0.2653 | 0.2004 |
| 0.4895 | 5.6 | 11200 | 0.2835 | 0.2119 |
| 0.4519 | 5.7 | 11400 | 0.2715 | 0.2061 |
| 0.484 | 5.8 | 11600 | 0.2663 | 0.2071 |
| 0.4734 | 5.9 | 11800 | 0.2615 | 0.2023 |
| 0.4563 | 6.0 | 12000 | 0.2604 | 0.1997 |
| 0.4193 | 6.1 | 12200 | 0.2708 | 0.2015 |
| 0.4516 | 6.2 | 12400 | 0.2724 | 0.2018 |
| 0.4609 | 6.3 | 12600 | 0.2745 | 0.2004 |
| 0.43 | 6.4 | 12800 | 0.2716 | 0.1979 |
| 0.4424 | 6.5 | 13000 | 0.2674 | 0.1963 |
| 0.4589 | 6.6 | 13200 | 0.2622 | 0.1977 |
| 0.4458 | 6.7 | 13400 | 0.2668 | 0.1994 |
| 0.4233 | 6.8 | 13600 | 0.2739 | 0.1978 |
| 0.4557 | 6.9 | 13800 | 0.2692 | 0.1972 |
| 0.4472 | 7.0 | 14000 | 0.2686 | 0.1942 |
| 0.4193 | 7.1 | 14200 | 0.2843 | 0.1959 |
| 0.4033 | 7.2 | 14400 | 0.2767 | 0.1945 |
| 0.4266 | 7.3 | 14600 | 0.2808 | 0.1931 |
| 0.419 | 7.4 | 14800 | 0.2801 | 0.1945 |
| 0.4352 | 7.5 | 15000 | 0.2764 | 0.1934 |
| 0.4248 | 7.6 | 15200 | 0.2818 | 0.1938 |
| 0.4001 | 7.7 | 15400 | 0.2754 | 0.1931 |
| 0.415 | 7.8 | 15600 | 0.2799 | 0.1916 |
| 0.4056 | 7.9 | 15800 | 0.2746 | 0.1916 |
| 0.419 | 8.0 | 16000 | 0.2789 | 0.1909 |
| 0.3974 | 8.1 | 16200 | 0.2913 | 0.1897 |
| 0.3999 | 8.2 | 16400 | 0.2894 | 0.1899 |
| 0.4179 | 8.3 | 16600 | 0.2819 | 0.1918 |
| 0.4081 | 8.4 | 16800 | 0.2868 | 0.1910 |
| 0.3963 | 8.5 | 17000 | 0.2835 | 0.1889 |
| 0.3748 | 8.6 | 17200 | 0.2841 | 0.1903 |
| 0.375 | 8.7 | 17400 | 0.2820 | 0.1874 |
| 0.3857 | 8.8 | 17600 | 0.2865 | 0.1872 |
| 0.3901 | 8.9 | 17800 | 0.2824 | 0.1882 |
| 0.4067 | 9.0 | 18000 | 0.2838 | 0.1887 |
| 0.3711 | 9.1 | 18200 | 0.2892 | 0.1897 |
| 0.3661 | 9.2 | 18400 | 0.2889 | 0.1883 |
| 0.3796 | 9.3 | 18600 | 0.2876 | 0.1886 |
| 0.3932 | 9.4 | 18800 | 0.2948 | 0.1877 |
| 0.3894 | 9.5 | 19000 | 0.2896 | 0.1884 |
| 0.3643 | 9.6 | 19200 | 0.2897 | 0.1868 |
| 0.384 | 9.7 | 19400 | 0.2887 | 0.1867 |
| 0.3951 | 9.8 | 19600 | 0.2905 | 0.1862 |
| 0.3595 | 9.9 | 19800 | 0.2893 | 0.1866 |
| 0.3758 | 10.0 | 20000 | 0.2893 | 0.1863 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
peterhsu/test-bert-finetuned-en-zh_TW-accelerate | 62b586a834fa85454f692183120b1c65008e8e6e | 2022-03-10T09:44:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | peterhsu | null | peterhsu/test-bert-finetuned-en-zh_TW-accelerate | 2 | null | transformers | 25,073 | Entry not found |
infinitylyj/DialogGPT-small-general | a3a568ce5876014650d8650e62faceae138d52b1 | 2022-03-05T10:29:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | infinitylyj | null | infinitylyj/DialogGPT-small-general | 2 | null | transformers | 25,074 | ---
tags:
- conversational
---
# General DialogGPT Model
|
mp6kv/main_intent_test | 1da99009812f22f0749779880a24e6b039fa3a02 | 2022-03-05T19:18:02.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mp6kv | null | mp6kv/main_intent_test | 2 | null | transformers | 25,075 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: main_intent_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# main_intent_test
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
Trained on custom data generated by labeling text according to five categories.
The five categories represent the essential user intents in the ACTS scenario.
- Connect : Greetings and introduction with the student
- Pump : Asking the student for information
- Inform : Providing information to the student
- Feedback : Praising the student (positive feedback) or informing the student they are not on the right path (negative feedback)
- None : Not related to scenario
The model takes a string of user input text and classifies it into one of the five categories.
## Intended uses & limitations
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mp6kv/main_intent_test")

output = classifier("great job, you're getting it!")
score = output[0]['score']
label = output[0]['label']
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
BigSalmon/Points3 | 8f25b6907c115d9615ab2539d9a74c69edbb7a0c | 2022-03-05T22:03:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/Points3 | 2 | null | transformers | 25,076 | Example Prompt:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
-
```
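A hedged generation sketch using this prompt format (the sampling parameters are illustrative, not the author's):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Points3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/Points3")

# Bullet points go in after "###"; the model continues the "text:" line
prompt = "###\n- declining viewership facing the nba.\n- does not have to be this way.\ntext:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|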
ahazeemi/hindiasr | 783d4cfdcf42a1144ab80c166207050027a7929b | 2022-03-06T13:02:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | ahazeemi | null | ahazeemi/hindiasr | 2 | null | transformers | 25,077 | Entry not found |
adalbertojunior/test-128-uncased-3 | 3c42ef0ae40c3733febbe66637dc54336e7df896 | 2022-03-06T13:43:52.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adalbertojunior | null | adalbertojunior/test-128-uncased-3 | 2 | null | transformers | 25,078 | Entry not found |
princeton-nlp/datamux-ner-10 | 66695a821e1ab8ed0ba53337a4534737147e9f23 | 2022-03-06T17:10:41.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-ner-10 | 2 | null | transformers | 25,079 | Entry not found |
MrAnderson/bert-base-512-full-trivia | 79b8ccf479aecb55f2b216893b0f5d45ab345f44 | 2022-03-07T14:20:45.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | MrAnderson | null | MrAnderson/bert-base-512-full-trivia | 2 | null | transformers | 25,080 | Entry not found |
clu-ling/roberta-finetuned-stsbenchmark | bc9f394dd3541168ef4bf9d8864aff96c3944f77 | 2022-03-06T21:32:04.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | clu-ling | null | clu-ling/roberta-finetuned-stsbenchmark | 2 | 0 | sentence-transformers | 25,081 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# clu-ling/roberta-finetuned-stsbenchmark
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "What is the large instrument the man is playing?"
docs = ["A man is playing a large flute.", "A man is playing a flute."]
#Load the model
model = SentenceTransformer('clu-ling/roberta-finetuned-stsbenchmark')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('clu-ling/roberta-finetuned-stsbenchmark')
model = AutoModel.from_pretrained('clu-ling/roberta-finetuned-stsbenchmark')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=clu-ling/roberta-finetuned-stsbenchmark)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 125 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': True}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AdarshRavis/BabishBot | 9dcb53aba39168947f61e166f92388a4eef92134 | 2022-03-12T06:22:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | AdarshRavis | null | AdarshRavis/BabishBot | 2 | null | transformers | 25,082 | ---
license: mit
---
This is a text-generation model fine-tuned on subtitles from Binging with Babish (https://www.youtube.com/c/bingingwithbabish).
Just type in your starting sentence, click "compute", and see what the model has to say! The first time you run the model, it may take a minute to load (after that it takes ~6 seconds to run).
It was created with the help of aitextgen (https://github.com/minimaxir/aitextgen), using a pretrained 124M GPT-2 model.
Disclaimer:
This model is intended for parody only and is not affiliated with Binging with Babish or the Babish Culinary Universe. |
Ensheng/Code-Roberta-MLM | 5b4bf709a1578448c98dee1f32feb2576f394edd | 2022-03-07T05:21:22.000Z | [
"pytorch"
] | null | false | Ensheng | null | Ensheng/Code-Roberta-MLM | 2 | 1 | null | 25,083 | Entry not found |
Kuray107/librispeech-semi-supervised-without-LM | a9dacdf8229a6899fa7b7005d2d0747ee18b5a2b | 2022-03-07T17:14:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/librispeech-semi-supervised-without-LM | 2 | null | transformers | 25,084 | ---
tags:
- generated_from_trainer
model-index:
- name: librispeech-semi-supervised-without-LM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# librispeech-semi-supervised-without-LM
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1837
- Wer: 0.0580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0565 | 0.56 | 1000 | 0.1354 | 0.0641 |
| 0.0548 | 1.12 | 2000 | 0.1320 | 0.0628 |
| 0.0478 | 1.68 | 3000 | 0.1247 | 0.0612 |
| 0.0451 | 2.24 | 4000 | 0.1256 | 0.0613 |
| 0.0401 | 2.8 | 5000 | 0.1269 | 0.0606 |
| 0.035 | 3.36 | 6000 | 0.1370 | 0.0595 |
| 0.0344 | 3.92 | 7000 | 0.1280 | 0.0589 |
| 0.031 | 4.48 | 8000 | 0.1350 | 0.0589 |
| 0.031 | 5.04 | 9000 | 0.1418 | 0.0614 |
| 0.0278 | 5.61 | 10000 | 0.1382 | 0.0604 |
| 0.0272 | 6.17 | 11000 | 0.1502 | 0.0615 |
| 0.0246 | 6.73 | 12000 | 0.1443 | 0.0609 |
| 0.0233 | 7.29 | 13000 | 0.1548 | 0.0589 |
| 0.0224 | 7.85 | 14000 | 0.1547 | 0.0599 |
| 0.0202 | 8.41 | 15000 | 0.1570 | 0.0590 |
| 0.0199 | 8.97 | 16000 | 0.1564 | 0.0594 |
| 0.0186 | 9.53 | 17000 | 0.1598 | 0.0595 |
| 0.0187 | 10.09 | 18000 | 0.1657 | 0.0585 |
| 0.017 | 10.65 | 19000 | 0.1690 | 0.0584 |
| 0.016 | 11.21 | 20000 | 0.1689 | 0.0588 |
| 0.0156 | 11.77 | 21000 | 0.1745 | 0.0585 |
| 0.0151 | 12.33 | 22000 | 0.1777 | 0.0583 |
| 0.0144 | 12.89 | 23000 | 0.1778 | 0.0590 |
| 0.0142 | 13.45 | 24000 | 0.1803 | 0.0585 |
| 0.0137 | 14.01 | 25000 | 0.1796 | 0.0581 |
| 0.0132 | 14.57 | 26000 | 0.1837 | 0.0580 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
cammy/bart-large-cnn-1000-sum-pad-early-tfidf | bd3f954645a14e5cdf293ebb4e247790a98bc13d | 2022-03-07T05:11:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-1000-sum-pad-early-tfidf | 2 | null | transformers | 25,085 | Entry not found |
cammy/bart-large-cnn-1000-sum-pad-early-tfidf1 | 65af793df8accffeaf492c189b8e1d98f8ffe983 | 2022-03-07T05:57:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-1000-sum-pad-early-tfidf1 | 2 | null | transformers | 25,086 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-1000-sum-pad-early-tfidf1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-1000-sum-pad-early-tfidf1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8527
- Rouge1: 24.6303
- Rouge2: 11.0396
- Rougel: 19.1384
- Rougelsum: 20.94
- Gen Len: 67.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.3304 | 1.0 | 1000 | 0.7234 | 25.9428 | 12.5482 | 21.0784 | 23.6041 | 64.68 |
| 0.1502 | 2.0 | 2000 | 0.8527 | 24.6303 | 11.0396 | 19.1384 | 20.94 | 67.84 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/bart-large-cnn-finetuned-weaksup-1000-pad-early-new1 | 0496351eadf1df84f80c095f5f06d8ed28b58156 | 2022-03-07T06:18:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-1000-pad-early-new1 | 2 | null | transformers | 25,087 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000-pad-early-new1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-pad-early-new1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4948
- Rouge1: 28.1465
- Rouge2: 13.4076
- Rougel: 22.2763
- Rougelsum: 25.2087
- Gen Len: 68.58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.156 | 1.0 | 1000 | 0.4377 | 27.8782 | 13.1274 | 21.2329 | 24.6465 | 66.25 |
| 0.0843 | 2.0 | 2000 | 0.4948 | 28.1465 | 13.4076 | 22.2763 | 25.2087 | 68.58 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
newhope/roberta-base-finetuned-cola | cc02683dfec1b966aae81bf25354664d1d787e14 | 2022-03-11T07:05:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | newhope | null | newhope/roberta-base-finetuned-cola | 2 | null | transformers | 25,088 | Entry not found |
Splend1dchan/byt5small-glue-mprc | 7a71361e4dc20f999246f331bfeca037f92f5297 | 2022-03-07T11:14:07.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5small-glue-mprc | 2 | null | transformers | 25,089 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: byt5small-glue-mprc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5small-glue-mprc
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.6.0a0+bf2bbd9
- Datasets 1.12.1
- Tokenizers 0.11.6
|
zhiweitong/dpr-ctx_encoder-single-nq-base | 18a65ba1ff4d3de11b19318cb44fe4a596cf9e0b | 2022-03-08T07:28:29.000Z | [
"pytorch",
"dpr",
"en",
"dataset:wiki_dpr",
"dataset:natural_questions",
"transformers"
] | null | false | zhiweitong | null | zhiweitong/dpr-ctx_encoder-single-nq-base | 2 | null | transformers | 25,090 | ---
language: en
datasets:
- wiki_dpr
- natural_questions
---
# dpr-ctx_encoder-single-nq-base
This encoder is used with [zhiweitong/dpr-answer_encoder-single-nq-base](https://huggingface.co/zhiweitong/dpr-answer_encoder-single-nq-base).
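A minimal encoding sketch (hedged — it assumes this checkpoint loads with the standard DPR classes):
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

tokenizer = DPRContextEncoderTokenizer.from_pretrained("zhiweitong/dpr-ctx_encoder-single-nq-base")
model = DPRContextEncoder.from_pretrained("zhiweitong/dpr-ctx_encoder-single-nq-base")

# Encode a passage into a dense vector for retrieval
inputs = tokenizer("Paris is the capital and largest city of France.", return_tensors="pt")
embedding = model(**inputs).pooler_output
print(embedding.shape)  # (1, hidden_size)
```
|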
spasis/distilbert-base-uncased-finetuned-imdb | 91eef200990b53e0c2cce00d84daf2e57528202f | 2022-03-07T13:50:20.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | spasis | null | spasis/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 25,091 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 2.5471 |
| No log | 2.0 | 80 | 2.4606 |
| No log | 3.0 | 120 | 2.5469 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
kimbob/distilbert-base-uncased-finetuned-emotion | 22f7c3c09524d2100167d4d39be7141c8a51d387 | 2022-03-07T15:08:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | kimbob | null | kimbob/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,092 | Entry not found |
OrfeasTsk/bert-base-uncased-finetuned-squadv2 | 2775fe933206b4390c66ba0cd06dbc7b40a45085 | 2022-03-08T18:35:59.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | OrfeasTsk | null | OrfeasTsk/bert-base-uncased-finetuned-squadv2 | 2 | null | transformers | 25,093 | { 'max_seq_length': 384,
'batch_size': 8,
'learning_rate': {'val': 5e-5, 'scheduler': 'Linear'},
'max_clip_norm': None,
'epochs': 2
} |
sudoparsa/wav2vec2-base-finetuned-ks | d5aaf5d477cb966f65ac68effdd87b3e1cb57849 | 2022-03-09T22:06:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | sudoparsa | null | sudoparsa/wav2vec2-base-finetuned-ks | 2 | null | transformers | 25,094 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0894
- Accuracy: 0.9828
## Model description
More information needed
## Intended uses & limitations
More information needed
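No usage example is given; below is a minimal keyword-spotting sketch with the audio-classification pipeline (`sample.wav` is a placeholder, and 16 kHz input is assumed as for the base model):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="sudoparsa/wav2vec2-base-finetuned-ks")
predictions = classifier("sample.wav", top_k=3)  # e.g. [{'label': 'yes', 'score': ...}, ...]
print(predictions)
```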
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5003 | 1.0 | 399 | 0.4284 | 0.9643 |
| 0.1868 | 2.0 | 798 | 0.1628 | 0.9748 |
| 0.1413 | 3.0 | 1197 | 0.1128 | 0.9796 |
| 0.0965 | 4.0 | 1596 | 0.0950 | 0.9826 |
| 0.0915 | 5.0 | 1995 | 0.0894 | 0.9828 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Splend1dchan/byt5small-glue-mnli | 297f34c755f4a83b57f7baade1319a5f302beee9 | 2022-03-10T08:40:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5small-glue-mnli | 2 | null | transformers | 25,095 | ByT5-small fine-tuned on the MNLI dataset for 3 epochs with a learning rate of 1e-4.
Validation (matched) accuracy: 0.80
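As a rough illustration (hedged — the exact text-to-text prompt template used for this checkpoint is not documented, so the format below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Splend1dchan/byt5small-glue-mnli")
model = AutoModelForSeq2SeqLM.from_pretrained("Splend1dchan/byt5small-glue-mnli")

# Hypothetical prompt format in the T5 GLUE style
text = "mnli premise: A soccer game with multiple males playing. hypothesis: Some men are playing a sport."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "entailment"
```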
|
z5ying/mbart-large-cc25-finetuned-source-to-target | 21dbc142b81086af0230ebe04c05e9ff095d0ed3 | 2022-04-01T03:43:40.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | z5ying | null | z5ying/mbart-large-cc25-finetuned-source-to-target | 2 | null | transformers | 25,096 | ---
tags:
- generated_from_trainer
model-index:
- name: mbart-large-cc25-finetuned-source-to-target
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-source-to-target
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
SuperAI2-Machima/mt5-small-translation_english-thai | 90cefbb6e940d91e5e5d74f0d75dc181294c63f2 | 2022-03-07T19:52:34.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SuperAI2-Machima | null | SuperAI2-Machima/mt5-small-translation_english-thai | 2 | null | transformers | 25,097 | Entry not found |
GermanT5/t5-efficient-gc4-german-base-nl36-old | 4937f3199f0e8764592aea53b22788f118ef0dc2 | 2022-02-27T10:11:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | GermanT5 | null | GermanT5/t5-efficient-gc4-german-base-nl36-old | 2 | 1 | transformers | 25,098 | Entry not found |
Splend1dchan/byt5small-squad1024 | 7ff00f6f1cec8213553466c5a2e3846d51c932ac | 2022-03-08T15:21:07.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5small-squad1024 | 2 | null | transformers | 25,099 | Entry not found |