modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sgugger/bert-finetuned-mrpc | 947a164a8bf38475ba012dbdff893aa98283386c | 2021-09-14T17:10:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | sgugger | null | sgugger/bert-finetuned-mrpc | 2 | null | transformers | 24,700 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8602941176470589
- name: F1
type: f1
value: 0.9032258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5152
- Accuracy: 0.8603
- F1: 0.9032
- Combined Score: 0.8818
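No usage snippet is included in the card, so here is a minimal sketch with a hypothetical sentence pair (MRPC is a paraphrase-detection task; the example sentences are made up):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sgugger/bert-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("sgugger/bert-finetuned-mrpc")

# Hypothetical sentence pair; the model predicts whether they are paraphrases
inputs = tokenizer("The company posted record profits.",
                   "Profits at the company hit a record high.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```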
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
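As a sketch, these settings map onto `transformers.TrainingArguments` roughly as follows; the output directory is a placeholder, and a per-device batch size of 8 across the 2 GPUs yields the totals of 16 listed above:

```python
from transformers import TrainingArguments

# Sketch only: output_dir is a placeholder name
args = TrainingArguments(
    output_dir="bert-finetuned-mrpc",
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # x 2 GPUs = total_train_batch_size 16
    per_device_eval_batch_size=8,    # x 2 GPUs = total_eval_batch_size 16
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```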
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| No log | 1.0 | 230 | 0.3668 | 0.8431 | 0.8881 | 0.8656 |
| No log | 2.0 | 460 | 0.3751 | 0.8578 | 0.9017 | 0.8798 |
| 0.4264 | 3.0 | 690 | 0.5152 | 0.8603 | 0.9032 | 0.8818 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.10.3.dev0
- Tokenizers 0.10.3
|
sgugger/my-bert-model | 809ca368a19b9403b2b2218ad40c9ccbcfb9b614 | 2021-10-04T15:25:17.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | sgugger | null | sgugger/my-bert-model | 2 | 1 | transformers | 24,701 | Entry not found |
sgugger/my-finetuned-bert-mprc | fd57beeb0e919147f7b908fd45063adff5ee5346 | 2021-09-20T22:07:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sgugger | null | sgugger/my-finetuned-bert-mprc | 2 | null | transformers | 24,702 | Entry not found |
shahukareem/dhivehi-roberta-base | 6c8515307f85c118a2eb8df86d417ac103bd4fef | 2021-07-10T00:19:12.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"dv",
"transformers",
"autotrain_compatible"
] | fill-mask | false | shahukareem | null | shahukareem/dhivehi-roberta-base | 2 | null | transformers | 24,703 | ---
language: dv
tags:
- dv
- roberta
widget:
- text: "<mask> މާލެ އަކީ ދިވެހިރާއްޖޭގެ"
---
# Dhivehi Roberta Base - Oscar
## Description
RoBERTa pretrained from scratch using the Jax/Flax backend, on the Dhivehi OSCAR corpus only.
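No usage snippet is included in the card; a minimal sketch reusing the widget prompt above:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="shahukareem/dhivehi-roberta-base")
# Same Dhivehi prompt as the widget in the metadata above
for prediction in fill_mask("<mask> މާލެ އަކީ ދިވެހިރާއްޖޭގެ"):
    print(prediction["token_str"], round(prediction["score"], 4))
```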
|
shahukareem/wav2vec2-xls-r-1b-dv-with-lm | b8138f5f00dd233328ea1ad53f02864bb3456da0 | 2022-02-19T04:02:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shahukareem | null | shahukareem/wav2vec2-xls-r-1b-dv-with-lm | 2 | null | transformers | 24,704 | # wav2vec2-xls-r-1b-dv-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset. |
shimu/bert_cn_finetuning | 95bdbec1ec2e0305386143f498555c34fac6a6c1 | 2021-09-07T00:55:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | shimu | null | shimu/bert_cn_finetuning | 2 | null | transformers | 24,705 | Entry not found |
shivam/xls-r-300m-hindi | 9c307f7f2c78bcf0fd83c8dd9491cb77e2591ef6 | 2022-01-31T16:58:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shivam | null | shivam/xls-r-300m-hindi | 2 | null | transformers | 24,706 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8111
- Wer: 0.5177
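No usage snippet is included in the card; the following is a minimal transcription sketch using the standard Wav2Vec2 CTC API (the audio file name is a placeholder, and the input must be resampled to 16 kHz mono):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("shivam/xls-r-300m-hindi")
model = Wav2Vec2ForCTC.from_pretrained("shivam/xls-r-300m-hindi")

# "sample.wav" is a placeholder; resample whatever rate it has to 16 kHz
speech_array, sampling_rate = torchaudio.load("sample.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```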
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9733 | 2.59 | 500 | 5.0697 | 1.0 |
| 3.3839 | 5.18 | 1000 | 3.3518 | 1.0 |
| 2.0596 | 7.77 | 1500 | 1.3992 | 0.7869 |
| 1.6102 | 10.36 | 2000 | 1.0712 | 0.6754 |
| 1.4587 | 12.95 | 2500 | 0.9280 | 0.6361 |
| 1.3667 | 15.54 | 3000 | 0.9281 | 0.6155 |
| 1.3042 | 18.13 | 3500 | 0.9037 | 0.5921 |
| 1.2544 | 20.73 | 4000 | 0.8996 | 0.5824 |
| 1.2274 | 23.32 | 4500 | 0.8934 | 0.5797 |
| 1.1763 | 25.91 | 5000 | 0.8643 | 0.5760 |
| 1.149 | 28.5 | 5500 | 0.8251 | 0.5544 |
| 1.1207 | 31.09 | 6000 | 0.8506 | 0.5527 |
| 1.091 | 33.68 | 6500 | 0.8370 | 0.5366 |
| 1.0613 | 36.27 | 7000 | 0.8345 | 0.5352 |
| 1.0495 | 38.86 | 7500 | 0.8380 | 0.5321 |
| 1.0345 | 41.45 | 8000 | 0.8285 | 0.5269 |
| 1.0297 | 44.04 | 8500 | 0.7836 | 0.5141 |
| 1.027 | 46.63 | 9000 | 0.8120 | 0.5180 |
| 0.9876 | 49.22 | 9500 | 0.8109 | 0.5188 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
shivangi/CoLA_64_128_output | ae2b41738acb7ff5caf3d822e28df4e1b3740885 | 2021-05-20T05:50:08.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | shivangi | null | shivangi/CoLA_64_128_output | 2 | null | transformers | 24,707 | Entry not found |
shivangi/MRPC_64_128_output | 814bab5fe6da9ac1c4fe9659550646d282b45298 | 2021-05-20T05:51:42.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | shivangi | null | shivangi/MRPC_64_128_output | 2 | null | transformers | 24,708 | Entry not found |
shivangi/MRPC_output | 00212228e7496eb59343556f7a58a2159e0e8097 | 2021-05-20T05:52:41.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | shivangi | null | shivangi/MRPC_output | 2 | null | transformers | 24,709 | Entry not found |
shiyue/roberta-large-realsumm-by-examples-fold1 | f7c98f83c154dfe66dbfc06bb4e56c636629bec2 | 2021-09-23T19:04:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-examples-fold1 | 2 | null | transformers | 24,710 | Entry not found |
shiyue/roberta-large-realsumm-by-examples-fold2 | 3974c2d8d76e6a19a6b587a83a686d3b9c345e49 | 2021-09-23T19:15:59.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-examples-fold2 | 2 | null | transformers | 24,711 | Entry not found |
shiyue/roberta-large-realsumm-by-examples-fold4 | 3fc8644844de473bf0504c1bbc4f0898ddebff2b | 2021-09-23T19:21:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-examples-fold4 | 2 | null | transformers | 24,712 | Entry not found |
shiyue/roberta-large-realsumm-by-examples-fold5 | b60589a11434d1d4914385675196d2bf68e8a1df | 2021-09-23T19:23:41.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-examples-fold5 | 2 | null | transformers | 24,713 | Entry not found |
shiyue/roberta-large-tac09 | c0e65d7a70f8fa923a5f74558f7ec447a0c98fcf | 2021-09-22T04:05:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-tac09 | 2 | null | transformers | 24,714 | Entry not found |
shortcake/Carlos | c19d3ca2d3787dcbb13a8e6a29460813db19ccd6 | 2022-01-02T04:18:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | shortcake | null | shortcake/Carlos | 2 | null | transformers | 24,715 | Entry not found |
shpotes/xls-r-et-cv_8_0 | 38358378e39451096b28746f9889eb05f05a95cc | 2022-03-24T11:56:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shpotes | null | shpotes/xls-r-et-cv_8_0 | 2 | null | transformers | 24,716 | ---
language:
- et
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- et
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-et-cv_8_0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: et
metrics:
- name: Test WER
type: wer
value: 0.34180826781638346
- name: Test CER
type: cer
value: 0.07356192733576256
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: et
metrics:
- name: Test WER
type: wer
value: 34.18
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: et
metrics:
- name: Test WER
type: wer
value: 45.53
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: et
metrics:
- name: Test WER
type: wer
value: 54.41
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ET dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4623
- Wer: 0.3420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3082 | 12.5 | 500 | 0.3871 | 0.4907 |
| 0.1497 | 25.0 | 1000 | 0.4168 | 0.4278 |
| 0.1243 | 37.5 | 1500 | 0.4446 | 0.4220 |
| 0.0954 | 50.0 | 2000 | 0.4426 | 0.3946 |
| 0.0741 | 62.5 | 2500 | 0.4502 | 0.3800 |
| 0.0533 | 75.0 | 3000 | 0.4618 | 0.3653 |
| 0.0447 | 87.5 | 3500 | 0.4518 | 0.3461 |
| 0.0396 | 100.0 | 4000 | 0.4623 | 0.3420 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
sijpapi/my-awesome-model | 2128a84fa21236ba2aed8e981919accefc963215 | 2021-11-03T10:21:26.000Z | [
"pytorch",
"layoutlmv2",
"text-classification",
"transformers"
] | text-classification | false | sijpapi | null | sijpapi/my-awesome-model | 2 | null | transformers | 24,717 | Entry not found |
simjo/model1_test | a97be57323a2abeb8b8ad73acd616a364ea993cb | 2021-11-29T21:46:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index"
] | text-classification | false | simjo | null | simjo/model1_test | 2 | null | transformers | 24,718 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: model1_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model1_test
This model is a fine-tuned version of [DaNLP/da-bert-hatespeech-detection](https://huggingface.co/DaNLP/da-bert-hatespeech-detection) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1816
- Accuracy: 0.9667
- F1: 0.3548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 150 | 0.1128 | 0.9667 | 0.2 |
| No log | 2.0 | 300 | 0.1666 | 0.9684 | 0.2963 |
| No log | 3.0 | 450 | 0.1816 | 0.9667 | 0.3548 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
simonmun/COHA1810s | 3b28e4308ed73b389b4aee9205e823c118541ec3 | 2021-05-20T21:29:41.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1810s | 2 | null | transformers | 24,719 | Entry not found |
simonmun/COHA1840s | c303235ea83ad07740657916c119f228f83454b1 | 2021-05-20T21:33:07.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1840s | 2 | null | transformers | 24,720 | Entry not found |
simonmun/COHA1870s | 3f753442c4e8e29eaff6147268c2fbc10455fb25 | 2021-05-20T21:35:29.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1870s | 2 | null | transformers | 24,721 | Entry not found |
simonmun/COHA1900s | d419d629bb26869507e8bbdbac49edfc512050a5 | 2021-05-20T21:38:56.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1900s | 2 | null | transformers | 24,722 | Entry not found |
simonmun/COHA1930s | ecc907efc15a13443b4929363103b467596a7616 | 2021-05-20T21:42:21.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1930s | 2 | null | transformers | 24,723 | Entry not found |
simonmun/COHA1950s | eaea5771ef10cd7198af65a8e0e1cff1fa82c993 | 2021-05-20T21:44:59.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1950s | 2 | null | transformers | 24,724 | Entry not found |
simonmun/COHA1980s | 7ed3d3d16e5226a87c5d8e7d7bdac42ed746d42c | 2021-05-20T21:48:12.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1980s | 2 | null | transformers | 24,725 | Entry not found |
simran-kh/muril-with-mlm-cased-temp | 7e2f3db68490c34374d85f0bb21aa1071faaecc5 | 2021-05-20T06:02:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simran-kh | null | simran-kh/muril-with-mlm-cased-temp | 2 | null | transformers | 24,726 | Entry not found |
sismetanin/rubert-ru-sentiment-liniscrowd | 3262804a0ffc941503ca5723dddbeb93ff5216a7 | 2021-05-20T06:08:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/rubert-ru-sentiment-liniscrowd | 2 | null | transformers | 24,727 | Entry not found |
sismetanin/xlm_roberta_base-ru-sentiment-krnd | 744cbebda0e994c24c925743cea5a039f693d7e2 | 2021-02-21T13:21:20.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_base-ru-sentiment-krnd | 2 | null | transformers | 24,728 | Entry not found |
sismetanin/xlm_roberta_base-ru-sentiment-liniscrowd | 886fbae77ac08c50f8d46016dccb316a72d51339 | 2021-02-21T15:24:43.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_base-ru-sentiment-liniscrowd | 2 | null | transformers | 24,729 | Entry not found |
sismetanin/xlm_roberta_large-ru-sentiment-krnd | 4c3c00a7a696b43013f9092a32380d0b48304c5e | 2021-02-21T13:22:21.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_large-ru-sentiment-krnd | 2 | null | transformers | 24,730 | Entry not found |
sismetanin/xlm_roberta_large-ru-sentiment-liniscrowd | 90750454412b4af2f7e7df744b27f61857a0cb41 | 2021-02-21T15:24:59.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_large-ru-sentiment-liniscrowd | 2 | null | transformers | 24,731 | Entry not found |
skillzzzzzy/urberto | 823d06be797622f3cc65f3e4ad3f6f6ef6fae835 | 2021-11-14T13:25:59.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | skillzzzzzy | null | skillzzzzzy/urberto | 2 | null | transformers | 24,732 | Entry not found |
skylord/wav2vec2-large-xlsr-greek-1 | 287140fdf93bf0b98151feedd41d4ccba473a59d | 2021-03-26T13:43:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"el",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | skylord | null | skylord/wav2vec2-large-xlsr-greek-1 | 2 | null | transformers | 24,733 | ---
language: el
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 34.006258
---
# Wav2Vec2-Large-XLSR-53-Greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.006258 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found [here](...).
|
smangrul/xls-r-mr | 5f4984ffd982cb25d449f69f93a0fc623a1b3cbf | 2022-03-24T11:58:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | smangrul | null | smangrul/xls-r-mr | 2 | null | transformers | 24,734 | ---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-mr
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: mr
metrics:
- type: wer
value: 49.7
name: Test WER
- name: Test CER
type: cer
value: 11.11
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5319
- Wer: 0.5973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.3987 | 22.73 | 500 | 3.3586 | 1.0 |
| 2.0563 | 45.45 | 1000 | 1.0375 | 0.8428 |
| 1.283 | 68.18 | 1500 | 0.5563 | 0.6996 |
| 1.0308 | 90.91 | 2000 | 0.4922 | 0.6398 |
| 0.8803 | 113.64 | 2500 | 0.4949 | 0.6153 |
| 0.7581 | 136.36 | 3000 | 0.4932 | 0.5965 |
| 0.6681 | 159.09 | 3500 | 0.5133 | 0.5921 |
| 0.6191 | 181.82 | 4000 | 0.5281 | 0.5909 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
smartpim/k2t_ru_01 | da258d6ce8412e1f9df35c61aeaa263c0ef01d1b | 2022-02-08T12:36:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | smartpim | null | smartpim/k2t_ru_01 | 2 | null | transformers | 24,735 | Entry not found |
smeoni/roberta-large-clrp | e707952bedb1066f375845f78558d1e909994239 | 2021-06-23T07:37:47.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/roberta-large-clrp | 2 | null | transformers | 24,736 | Entry not found |
socrates/socrates2.0 | 826e863c1a61e8cb42912c35ceebc99190ad2c1d | 2022-01-14T16:07:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | socrates | null | socrates/socrates2.0 | 2 | null | transformers | 24,737 | The unexamined life is not worth living
|
soheeyang/rdr-question_encoder-single-trivia-base | 73cc73212a4d8553e749d1169826a8e5bde85dc0 | 2021-04-15T15:59:29.000Z | [
"pytorch",
"tf",
"dpr",
"feature-extraction",
"arxiv:2010.10999",
"transformers"
] | feature-extraction | false | soheeyang | null | soheeyang/rdr-question_encoder-single-trivia-base | 2 | null | transformers | 24,738 | # rdr-question_encoder-single-trivia-base
Reader-Distilled Retriever (`RDR`)
Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020
The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a DPR retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k.
This model is the question encoder of RDR trained solely on TriviaQA (single-trivia). It was trained by the authors and is the official checkpoint of RDR.
## Performance
The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
For the values of DPR, those in parentheses are taken directly from the paper. The values without parentheses are reported using a reproduction of DPR that consists of [this question encoder](https://huggingface.co/soheeyang/dpr-question_encoder-single-trivia-base) and [this context encoder](https://huggingface.co/soheeyang/dpr-ctx_encoder-single-trivia-base).
| | Top-K Passages | 1 | 5 | 20 | 50 | 100 |
|-------------|------------------|-----------|-----------|-----------|-----------|-----------|
|**TriviaQA Dev** | **DPR** | 54.27 | 71.11 | 79.53 | 82.72 | 85.07 |
| | **RDR (This Model)** | **61.84** | **75.93** | **82.56** | **85.35** | **87.00** |
|**TriviaQA Test**| **DPR** | 54.41 | 70.99 | 79.31 (79.4) | 82.90 | 84.99 (85.0) |
| | **RDR (This Model)** | **62.56** | **75.92** | **82.52** | **85.64** | **87.26** |
## How to Use
RDR shares the same architecture as DPR and therefore uses `DPRQuestionEncoder` as the model class.
Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`,
so please specify the exact class when loading the model.
```python
from transformers import DPRQuestionEncoder, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-question_encoder-single-trivia-base")
question_encoder = DPRQuestionEncoder.from_pretrained("soheeyang/rdr-question_encoder-single-trivia-base")
data = tokenizer("question comes here", return_tensors="pt")
question_embedding = question_encoder(**data).pooler_output # embedding vector for question
```
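Retrieval then reduces to scoring the question embedding against passage embeddings by inner product. A minimal sketch follows; note that the context-encoder checkpoint id below merely mirrors the naming pattern of this question encoder and is an assumption, not something stated in this card:

```python
from transformers import DPRContextEncoder, AutoTokenizer

# Assumed companion checkpoint, mirroring the question-encoder naming above
ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/rdr-ctx_encoder-single-trivia-base")
ctx_tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-ctx_encoder-single-trivia-base")

passages = ["passage one comes here", "passage two comes here"]
ctx_data = ctx_tokenizer(passages, padding=True, return_tensors="pt")
ctx_embeddings = ctx_encoder(**ctx_data).pooler_output  # (num_passages, hidden)

# `question_embedding` comes from the snippet above; higher score = more relevant
scores = question_embedding @ ctx_embeddings.T          # (1, num_passages)
print(scores.argmax(dim=1))                             # index of the top-ranked passage
```

The top-k answer recall figures above correspond to ranking all corpus passages by exactly these scores.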
|
songqian/first_model | 5c1a810b872c6fab475b12c515447c76ed1a4adf | 2021-11-02T14:20:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | songqian | null | songqian/first_model | 2 | null | transformers | 24,739 | |
soniakris123/soniakris | ad22d0b28427f79824407cfea8857bf38e7a8ae9 | 2021-05-20T07:10:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | soniakris123 | null | soniakris123/soniakris | 2 | null | transformers | 24,740 | Entry not found |
sontn122/xlm-roberta-large-finetuned-squad-v2 | 83ff0c3bb41c7342f255f0bc9ff116b2eaed85fd | 2021-10-11T13:30:06.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | sontn122 | null | sontn122/xlm-roberta-large-finetuned-squad-v2 | 2 | null | transformers | 24,741 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: xlm-roberta-large-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-squad-v2
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.029 | 1.0 | 950 | 0.9281 |
| 0.9774 | 2.0 | 1900 | 0.6130 |
| 0.6781 | 3.0 | 2850 | 0.4627 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
soroush/model | cfc3a0f497e719aeb6d38b6e7694c08df3f0697f | 2020-07-11T18:01:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | soroush | null | soroush/model | 2 | null | transformers | 24,742 | Entry not found |
spandan96/T5_SEO_Title_Generator | 7ef37c1f060a99fc614f7d7f225920574ff22238 | 2021-06-30T16:07:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | spandan96 | null | spandan96/T5_SEO_Title_Generator | 2 | null | transformers | 24,743 | Entry not found |
sparki/kinkyfurs-gpt2 | dfcc8850a0a48858ae1561e6f363a2a9c06c88fc | 2021-10-28T16:26:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:mit"
] | text-generation | false | sparki | null | sparki/kinkyfurs-gpt2 | 2 | null | transformers | 24,744 | ---
language: en
license: mit
---
Import it using `pipeline`:

```python
from transformers import pipeline

text_generation = pipeline('text-generation', model='sparki/kinkyfurs-gpt2')
```

Then use it:

```python
prefix_text = input()
text_generation(prefix_text, max_length=50, num_beams=5, no_repeat_ngram_size=2, early_stopping=True)
```
|
sripadh8/distilbert-base-uncased | fbaa40812c0555175d38df7c750eab5d3186aba0 | 2021-05-20T08:00:43.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sripadh8 | null | sripadh8/distilbert-base-uncased | 2 | null | transformers | 24,745 | Entry not found |
sshasnain/wav2vec2-xls-r-300m-bangla-command-synthetic | cafc98d738be39722b4c5cb2777043241834d553 | 2022-02-14T08:39:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sshasnain | null | sshasnain/wav2vec2-xls-r-300m-bangla-command-synthetic | 2 | null | transformers | 24,746 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-synthetic
This model is a fine-tuned version of [sshasnain/wav2vec2-xls-r-300m-bangla-command](https://huggingface.co/sshasnain/wav2vec2-xls-r-300m-bangla-command) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0254
- eval_wer: 0.4311
- eval_runtime: 2.5036
- eval_samples_per_second: 76.689
- eval_steps_per_second: 9.586
- epoch: 35.71
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sshleifer/bart-large-fp32 | 0a64a99d65087b3e4dec59ae20869194bac8346c | 2020-09-22T16:20:39.000Z | [
"pytorch",
"rust",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | sshleifer | null | sshleifer/bart-large-fp32 | 2 | null | transformers | 24,747 | Entry not found |
sshleifer/dev-ft-en-ro | 75dcc68256bb83b4977d5aa1da69d895e1bdbced | 2020-07-21T19:37:34.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/dev-ft-en-ro | 2 | null | transformers | 24,748 | Entry not found |
sshleifer/distill-mbart-en-ro-12-6 | d16a65b2610bfe828da53535476efc6d5aebfd7d | 2021-03-16T01:57:13.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/distill-mbart-en-ro-12-6 | 2 | null | transformers | 24,749 | Entry not found |
sshleifer/student-pegasus-xsum-6-6 | 7d6583f33427337a083087cd16d719a24eb5656c | 2020-09-11T04:04:22.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student-pegasus-xsum-6-6 | 2 | null | transformers | 24,750 | Entry not found |
sshleifer/student_enro_avg_12_1 | ac8a206d8af22ae31594c3f0e64f46cb5537e493 | 2021-06-14T09:27:51.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_enro_avg_12_1 | 2 | null | transformers | 24,751 | Entry not found |
sshleifer/student_enro_avg_12_6 | 2d3c893504f0e825980372919c89b3640695f528 | 2020-07-18T20:16:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_enro_avg_12_6 | 2 | null | transformers | 24,752 | Entry not found |
sshleifer/student_enro_sum_12_2 | e7ff4566e259f41f5ddcb95db6e3fd743f31e462 | 2020-07-18T20:25:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_enro_sum_12_2 | 2 | null | transformers | 24,753 | Entry not found |
sshleifer/student_enro_sum_12_3 | d70cd4b75141ad96c19bf8257e18bf3d02ba2b33 | 2020-07-18T20:25:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_enro_sum_12_3 | 2 | null | transformers | 24,754 | Entry not found |
sshleifer/student_marian_en_ro_1_1 | 21c89559a52daeffa37bfe28611255b50c9b7dff | 2020-08-26T02:19:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_marian_en_ro_1_1 | 2 | null | transformers | 24,755 | Entry not found |
sshleifer/student_marian_en_ro_6_4 | 6596503b9f5b91e767fa3306a4afab0a884a74a7 | 2020-08-26T05:14:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_marian_en_ro_6_4 | 2 | null | transformers | 24,756 | Entry not found |
sshleifer/student_mbart_en_ro_12_1 | 0be19fdf944e9aa3b3f2d651d1a412869beb6cd7 | 2020-07-15T15:14:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_mbart_en_ro_12_1 | 2 | null | transformers | 24,757 | Entry not found |
sshleifer/student_mbart_en_ro_12_6 | fe2b55f76ade6d8692a73f06b7a8af2fa76257ef | 2020-07-15T15:14:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_mbart_en_ro_12_6 | 2 | null | transformers | 24,758 | Entry not found |
sshleifer/student_mbart_en_ro_1_1 | 03009b26cd280472927a109247d6c4777dd79bce | 2020-07-15T15:27:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_mbart_en_ro_1_1 | 2 | null | transformers | 24,759 | Entry not found |
sshleifer/student_pegasus_xsum_16_8 | e0d5f3de379e6acb9c031c9cd054b45ed2e6c13c | 2020-08-27T21:23:21.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_pegasus_xsum_16_8 | 2 | null | transformers | 24,760 | Entry not found |
sshleifer/student_xsum_12_6 | 64fe9ab850ec03e682ca79ed136dff6c5b7d69fe | 2021-06-14T09:51:51.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_12_6 | 2 | null | transformers | 24,761 | Entry not found |
sshleifer/student_xsum_6_12 | c8c21566a8f466ee57d5a56788dd34ba8aab2617 | 2021-06-14T10:08:37.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_6_12 | 2 | null | transformers | 24,762 | Entry not found |
ssun32/bert_base_nli_turkle | 4ad551f5bc382c87c6e9d5f62b7f43bbb72f0184 | 2021-05-20T07:13:17.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ssun32 | null | ssun32/bert_base_nli_turkle | 2 | null | transformers | 24,763 | Entry not found |
stasvmk/tnkff_pulse_ru_gpt | 438b3c0d9cb59f1084d0124336615a86f6474923 | 2022-01-09T20:11:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stasvmk | null | stasvmk/tnkff_pulse_ru_gpt | 2 | null | transformers | 24,764 | Entry not found |
stefan-it/electra-base-gc4-64k-900000-cased-discriminator | 96f12b233b9249e0622f14f86be4c53bf6d9c927 | 2021-05-01T11:11:31.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit"
] | null | false | stefan-it | null | stefan-it/electra-base-gc4-64k-900000-cased-discriminator | 2 | null | transformers | 24,765 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
|
stonkgs/protstonkgs | b0848b310bdcd238ce948cb367f6bc887d90bcf3 | 2021-10-13T14:45:38.000Z | [
"pytorch",
"big_bird",
"transformers"
] | null | false | stonkgs | null | stonkgs/protstonkgs | 2 | null | transformers | 24,766 | Entry not found |
stonkgs/stonkgs-150k | bc5ba84b3732f3b93dbd844a0f7e6437c25ff8c6 | 2021-07-26T12:00:51.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | stonkgs | null | stonkgs/stonkgs-150k | 2 | null | transformers | 24,767 | Entry not found |
subbareddyiiit/bert_csl_gold8k | d8ad6b5d7157609384c1cdf70cff17c759e1808f | 2021-05-20T07:17:19.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | subbareddyiiit | null | subbareddyiiit/bert_csl_gold8k | 2 | null | transformers | 24,768 | hello
|
sultan/ArabicTransformer-large-encoder | 83ba5d6f0f279cef48bd197af069225e0b73f1e6 | 2021-10-08T05:52:28.000Z | [
"pytorch",
"funnel",
"feature-extraction",
"transformers"
] | feature-extraction | false | sultan | null | sultan/ArabicTransformer-large-encoder | 2 | null | transformers | 24,769 | Entry not found |
sunqq2008/sunqq-bert_finetunning | d24d9048d44e4996dbf972fcb136954674c5f37e | 2021-07-20T01:48:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sunqq2008 | null | sunqq2008/sunqq-bert_finetunning | 2 | null | transformers | 24,770 | Entry not found |
sv/gpt2-finetuned-nft-shakes-seuss-2 | 4c5e85a37389303b49f00a7669419c303d82fab2 | 2021-09-07T06:05:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | sv | null | sv/gpt2-finetuned-nft-shakes-seuss-2 | 2 | null | transformers | 24,771 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt2-finetuned-nft-shakes-seuss-2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-nft-shakes-seuss-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3454 | 1.0 | 1490 | 4.1027 |
| 4.0534 | 2.0 | 2980 | 3.9857 |
| 3.9384 | 3.0 | 4470 | 3.9547 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
swapnil2911/DialoGPT-small-arya | 215005d4d49b19eb39585e9de00d85b6df49be61 | 2021-06-09T06:27:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | swapnil2911 | null | swapnil2911/DialoGPT-small-arya | 2 | null | transformers | 24,772 | pipeline_tag: conversational |
swapnil2911/DialoGPT-test-arya | 782e75dce1333b181643020dc8ebf0c582f76cc3 | 2021-06-09T06:19:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | swapnil2911 | null | swapnil2911/DialoGPT-test-arya | 2 | null | transformers | 24,773 | pipeline_tag: conversational |
swcrazyfan/KingJamesify-T5-Base | ae67833372d0d3b0826d83ee5fd2c69fe61988b5 | 2022-02-18T03:46:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"Bible",
"KJV",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | swcrazyfan | null | swcrazyfan/KingJamesify-T5-Base | 2 | null | transformers | 24,774 | ---
language: en
license: apache-2.0
tags:
- Bible
- KJV
---
# King Jamesify
This seq2seq model is my first experiment in "translating" modern English into the famous KJV Bible style.
The model is based on Google's "T5 Efficient Base" model. It was fine-tuned for 3 epochs on a NET-to-KJV dataset.
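A minimal usage sketch (not from the original card; whether the checkpoint expects a task prefix is unstated, so plain text is passed here):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("swcrazyfan/KingJamesify-T5-Base")
model = AutoModelForSeq2SeqLM.from_pretrained("swcrazyfan/KingJamesify-T5-Base")

inputs = tokenizer("Do not be afraid of failure.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
|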
swcrazyfan/TB-2.7B | c3048d3347f0126bca98cad498ba51c8d7c58088 | 2021-07-04T10:49:42.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TB-2.7B | 2 | null | transformers | 24,775 | Entry not found |
swcrazyfan/TEFL-2.7B-6K | 2279d5e96c5ff2578d2add15c988905d010b010a | 2021-06-05T07:53:03.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TEFL-2.7B-6K | 2 | null | transformers | 24,776 | Entry not found |
sylviachency/distilbert-base-uncased-finetuned-cola | 3f23d5bed645b2f7a35596b26b371381b6bb458f | 2022-02-12T06:48:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | sylviachency | null | sylviachency/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 24,777 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5235221651747541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9155
- Matthews Correlation: 0.5235
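For reference, the Matthews correlation reported above can be computed with scikit-learn's implementation; a tiny sketch with made-up labels (not the CoLA data):

```python
from sklearn.metrics import matthews_corrcoef

# Made-up binary labels purely to illustrate the metric
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # 1.0 = perfect, 0.0 = chance level
```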
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5275 | 1.0 | 535 | 0.5174 | 0.4181 |
| 0.3496 | 2.0 | 1070 | 0.5617 | 0.4857 |
| 0.2359 | 3.0 | 1605 | 0.6661 | 0.5029 |
| 0.1701 | 4.0 | 2140 | 0.8052 | 0.5091 |
| 0.1266 | 5.0 | 2675 | 0.9155 | 0.5235 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
tal-yifat/injury-report-distilgpt2-test | e16d886daafdb9eab7d4670251efcbfef507d720 | 2021-10-18T02:15:31.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | tal-yifat | null | tal-yifat/injury-report-distilgpt2-test | 2 | null | transformers | 24,778 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: injury-report-distilgpt2-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# injury-report-distilgpt2-test
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 380 | 3.6525 |
| 3.9116 | 2.0 | 760 | 3.5507 |
| 3.6015 | 3.0 | 1140 | 3.5243 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
tanmoyio/MiniVec | 5c6b5feadb73e59973e9bd36cfa4f60934ee366a | 2022-02-08T17:04:14.000Z | [
"pytorch"
] | null | false | tanmoyio | null | tanmoyio/MiniVec | 2 | null | null | 24,779 | Entry not found |
tareknaous/bert2bert-empathetic-dialogues | f062773518b610e8ba88538b94dde51d319f16bf | 2022-02-21T08:56:00.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/bert2bert-empathetic-dialogues | 2 | null | transformers | 24,780 | Entry not found |
tau/t5-v1_1-large-rss | 5cf6eccfd46682758bc2216777c2c177adcc21e0 | 2021-08-20T17:35:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2108.05857",
"arxiv:2101.00438",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5-v1_1-large-rss | 2 | null | transformers | 24,781 | ---
language: en
datasets:
- c4
- wikipedia
metrics:
- f1
---
# T5-V1.1-large-rss
This model is [T5-v1.1-large](https://huggingface.co/google/t5-v1_1-large) finetuned on an RSS dataset. The model was finetuned as part of
["How Optimal is Greedy Decoding for Extractive Question Answering?"](https://arxiv.org/abs/2108.05857), while the RSS pretraining method was introduced in [this paper](https://arxiv.org/pdf/2101.00438.pdf).
## Model description
The original [T5-v1.1-large](https://huggingface.co/google/t5-v1_1-large) was only pre-trained on C4, excluding any supervised training. Our version is further trained with the Recurrent Span Selection (RSS) scheme, using a sample from the dataset used to pretrain [Splinter](tau/splinter-large):
* contexts with a span occurring more than once are detected
* a single instance of the recurring span is masked
* the model is trained (teacher forcing) to predict the masked span
This training scheme naturally matches the extractive question answering task.
At training time, the masked span is replaced with `<extra_id_0>` and the labels are formatted as `<extra_id_0>span<extra_id_1>`. Unlike [Splinter](tau/splinter-large), only one span is masked at a time.
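As a toy illustration of how such a training pair is built (hypothetical helper code, not the authors' preprocessing):

```python
# Detect a recurring span and mask a single instance of it (toy example)
context = "Delegates met in Vienna. The treaty was later signed in Vienna."
span = "Vienna"
start = context.find(span)  # mask only the first occurrence
source = context[:start] + "<extra_id_0>" + context[start + len(span):]
target = f"<extra_id_0>{span}<extra_id_1>"
print(source)  # Delegates met in <extra_id_0>. The treaty was later signed in Vienna.
print(target)  # <extra_id_0>Vienna<extra_id_1>
```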
## Intended uses & limitations
This model naturally fits tasks where a span from a context is intended to be copied, like extractive question answering.
This checkpoint is primarily aimed at zero-shot use; further fine-tuning it on an annotated dataset yields results equal to those of the original T5-v1.1-large.
### How to use
You can use this model directly, but it is recommended to format the input in line with the training scheme, as a text/question/answer prompt:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('tau/t5-v1_1-large-rss')
tokenizer = AutoTokenizer.from_pretrained('tau/t5-v1_1-large-rss')
passage = 'Barack Hussein Obama II is an American politician and attorney who served as the 44th president of the United States from 2009 to 2017. '
question = 'When was Obama inaugurated?'
text = f'Text: {passage}.\nQuestion: {question}\nAnswer:{tokenizer.additional_special_tokens[0]}.'
encoded_input = tokenizer(text, return_tensors='pt')
output_ids = model.generate(input_ids=encoded_input.input_ids, attention_mask=encoded_input.attention_mask,
eos_token_id=tokenizer.additional_special_tokens_ids[1], num_beams=1, max_length=512, min_length=3)
tokenizer.decode(output_ids[0])
```
The generated answer is then `"<pad><extra_id_0> 2009<extra_id_1>"`, while the one generated by the original [T5-v1.1-large](https://huggingface.co/google/t5-v1_1-large) is `"<pad><extra_id_0> On January 20, 2009<extra_id_1>"` - a correct yet non-extractive answer.
### Limitations and bias
Although using the model with greedy decoding tends toward extractive outputs, it may sometimes produce non-extractive ones, be it different casing or a whole different string (or substring) that bears another semantic meaning.
### Finetuning
The model was finetuned on 100,000 RSS examples for 3 epochs, using the Adafactor optimizer with a constant learning rate of 5e-5.
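In the transformers library, a constant-rate Adafactor setup along the following lines would match those hyperparameters (a sketch under the stated settings, not the authors' actual training script):
```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor

model = AutoModelForSeq2SeqLM.from_pretrained('google/t5-v1_1-large')
optimizer = Adafactor(
    model.parameters(),
    lr=5e-5,
    scale_parameter=False,
    relative_step=False,  # disable Adafactor's internal schedule, keeping lr constant
    warmup_init=False,
)
```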
## Evaluation results
Evaluated over few-shot QA in a zero-shot setting (no finetuning on annotated examples):
|Model \ Dataset| SQuAD |TriviaQA | NaturalQs | NewsQA | SearchQA | HotpotQA | BioASQ | TextbookQA|
|:-------------:|:-----:|:-------:|:---------:|:------:|:--------:|:--------:|:------:|:---------:|
|T5 | 50.4 | 61.7 | 42.1 | 19.2 | 24.0 | 43.3 | 55.5 | 17.8 |
|T5-rss | 71.4 | 69.3 | 57.2 | 43.2 | 29.7 | 59.0 | 65.5 | 39.0 |
The gap between the two models diminishes as more training examples are introduced; for additional results, see the [paper](https://arxiv.org/abs/2108.05857).
### BibTeX entry and citation info
```bibtex
@inproceedings{ram-etal-2021-shot,
title = "Few-Shot Question Answering by Pretraining Span Selection",
author = "Ram, Ori and
Kirstain, Yuval and
Berant, Jonathan and
Globerson, Amir and
Levy, Omer",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.239",
doi = "10.18653/v1/2021.acl-long.239",
pages = "3066--3079",
}
@misc{castel2021optimal,
title={How Optimal is Greedy Decoding for Extractive Question Answering?},
author={Or Castel and Ori Ram and Avia Efrat and Omer Levy},
year={2021},
eprint={2108.05857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tbochens/test-train | 89ce140dca9103e07a8550410652b705fd8cbbc0 | 2021-12-29T19:25:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | tbochens | null | tbochens/test-train | 2 | null | transformers | 24,782 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: test-train
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8926746166950595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-train
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7268
- Accuracy: 0.8456
- F1: 0.8927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
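These settings correspond roughly to the TrainingArguments sketched below (the actual script behind this auto-generated card is not shown, so treat this as an approximation):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test-train",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon stay at the defaults listed above
)
```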
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3470 | 0.8627 | 0.9014 |
| 0.4987 | 2.0 | 918 | 0.5782 | 0.8382 | 0.8914 |
| 0.2796 | 3.0 | 1377 | 0.7268 | 0.8456 | 0.8927 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tdeme/twitter_bias_model | 4fc4f213a40c91016eea1f7539d9208d13b25771 | 2021-08-05T21:40:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | tdeme | null | tdeme/twitter_bias_model | 2 | null | transformers | 24,783 | Entry not found |
textattack/albert-base-v2-WNLI | a744b4cca9bb8b5251508e8f14a982379b42084c | 2020-07-06T16:33:17.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/albert-base-v2-WNLI | 2 | null | transformers | 24,784 | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5915492957746479, as measured by the
eval set accuracy, found after 0 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-cased-STS-B | a2bedc49081149ae315d7117481b1119fc7c613d | 2020-06-09T16:46:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-cased-STS-B | 2 | null | transformers | 24,785 | Entry not found |
textattack/facebook-bart-large-MRPC | a818d6c8eedf33f85bd9955f445aa7c4de98d324 | 2020-06-09T16:49:43.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/facebook-bart-large-MRPC | 2 | null | transformers | 24,786 | Entry not found |
tgood/bigbird-roberta-base | 3992e460426871ae5068ab1a90f39d7bf218db69 | 2022-01-28T18:28:37.000Z | [
"pytorch",
"big_bird",
"feature-extraction",
"transformers"
] | feature-extraction | false | tgood | null | tgood/bigbird-roberta-base | 2 | null | transformers | 24,787 | Entry not found |
thatdramebaazguy/movie-roberta-MITmovie-squad | 88a895b955f85dca73f20da15f601af847eca32e | 2022-07-01T19:02:00.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"English",
"dataset:imdb",
"dataset:cornell_movie_dialogue",
"dataset:MIT Movie",
"transformers",
"roberta-base",
"qa",
"movies",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | thatdramebaazguy | null | thatdramebaazguy/movie-roberta-MITmovie-squad | 2 | 1 | transformers | 24,788 | ---
datasets:
- imdb
- cornell_movie_dialogue
- MIT Movie
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- question-answering
- qa
- movies
license: cc-by-4.0
---
# roberta-base + DAPT + Task Transfer for Domain-Specific QA
Objective:
This is Roberta Base with Domain Adaptive Pretraining on Movie Corpora --> then trained for the NER task using the MIT Movie Dataset --> then the head was changed to do the SQuAD task. This makes a QA model capable of answering questions in the movie domain, with additional information coming from a different task (NER - Task Transfer).
https://huggingface.co/thatdramebaazguy/movie-roberta-base was used as the MovieRoberta.
```
from transformers import pipeline

model_name = "thatdramebaazguy/movie-roberta-MITmovie-squad"
qa_pipeline = pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
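The resulting pipeline can then be queried with a question and a context. The example below is illustrative; the exact prediction may differ:
```python
result = qa_pipeline(
    question="Who directed Inception?",
    context="Inception is a 2010 science fiction film written and directed by Christopher Nolan.",
)
print(result["answer"], result["score"])
```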
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** NER --> QA
**Training data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names, MIT Movie, SQuADv1
**Eval data:** MoviesQA (From https://github.com/ibm-aur-nlp/domain-specific-QA)
**Infrastructure**: 4x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/movieR_NER_squad.sh)
## Hyperparameters
```
Num examples = 88567
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 128
```
## Performance
### Eval on SQuADv1
- eval_samples = 10790
- exact_match = 83.0274
- f1 = 90.1615
### Eval on MoviesQA
- eval_samples = 5032
- exact_match = 51.64944
- f1 = 65.53983
Github Repo:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
thatdramebaazguy/movie-roberta-base | cc2c9085e9639921e2db8ec0bdbd1aff7f7f945f | 2022-07-01T19:23:33.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"English",
"dataset:imdb",
"dataset:cornell_movie_dialogue",
"dataset:polarity_movie_data",
"dataset:25mlens_movie_data",
"transformers",
"roberta-base",
"masked-language-modeling",
"masked-lm",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | thatdramebaazguy | null | thatdramebaazguy/movie-roberta-base | 2 | 1 | transformers | 24,789 | ---
datasets:
- imdb
- cornell_movie_dialogue
- polarity_movie_data
- 25mlens_movie_data
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- masked-language-modeling
- masked-lm
license: cc-by-4.0
---
# roberta-base for MLM
Objective: To make a Roberta Base for the Movie Domain by using various Movie Datasets as plain text for Masked Language Modeling.
This is the Movie Roberta to be used in Movie Domain applications.
```
from transformers import pipeline

model_name = "thatdramebaazguy/movie-roberta-base"
fill_mask = pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="fill-mask")
```
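The resulting pipeline can then complete masked movie-domain sentences (the predictions are illustrative):
```python
for prediction in fill_mask("The <mask> was directed by Christopher Nolan."):
    print(prediction["token_str"], round(prediction["score"], 3))
```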
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names
**Eval data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names
**Infrastructure**: 4x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/train_movie_roberta.sh)
## Hyperparameters
```
Num examples = 4767233
Num Epochs = 2
Instantaneous batch size per device = 20
Total train batch size (w. parallel, distributed & accumulation) = 80
Gradient Accumulation steps = 1
Total optimization steps = 119182
eval_loss = 1.6153
eval_samples = 20573
perplexity = 5.0296
learning_rate=5e-05
n_gpu = 4
```
## Performance
perplexity = 5.0296
Some of my work:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
thatdramebaazguy/movie-roberta-squad | 00d41ff842d7225f201df2f1c79c70f633bd75de | 2022-07-01T18:53:05.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"English",
"dataset:imdb",
"dataset:cornell_movie_dialogue",
"dataset:SQuAD",
"transformers",
"roberta-base",
"qa",
"movies",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | thatdramebaazguy | null | thatdramebaazguy/movie-roberta-squad | 2 | 1 | transformers | 24,790 | ---
datasets:
- imdb
- cornell_movie_dialogue
- SQuAD
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- question-answering
- qa
- movies
license: cc-by-4.0
---
# roberta-base + DAPT + Domain-Specific QA
Objective:
This is Roberta Base with Domain Adaptive Pretraining on Movie Corpora --> then the head was changed to do the SQuAD task. This makes a QA model capable of answering questions in the movie domain.
https://huggingface.co/thatdramebaazguy/movie-roberta-base was used as the MovieRoberta.
```
from transformers import pipeline

model_name = "thatdramebaazguy/movie-roberta-squad"
qa_pipeline = pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
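The checkpoint can also be loaded without the pipeline wrapper. A minimal extractive-QA sketch follows; the question and context are illustrative:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "thatdramebaazguy/movie-roberta-squad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

inputs = tokenizer("Who starred in Titanic?",
                   "Titanic stars Leonardo DiCaprio and Kate Winslet.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the most likely start and end token positions
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start:end + 1]))
```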
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** QA
**Training data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names, SQuADv1
**Eval data:** MoviesQA (From https://github.com/ibm-aur-nlp/domain-specific-QA)
**Infrastructure**: 1x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/train_movieR_just_squadv1.sh)
## Hyperparameters
```
Num examples = 88567
Num Epochs = 10
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 32
```
## Performance
### Eval on MoviesQA
- eval_samples = 5032
- exact_match = 51.64944
- f1 = 65.53983
### Eval on SQuADv1
- exact_match = 81.23936
- f1 = 89.27827
Github Repo:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
this-is-real/easybart | a54206758c06db53a5604a2991f799598a12a210 | 2021-12-22T14:22:51.000Z | [
"pytorch",
"bart",
"transformers"
] | null | false | this-is-real | null | this-is-real/easybart | 2 | null | transformers | 24,791 | |
this-is-real/mrc-pretrained-roberta-large-1 | 2895d130700101836b517a1d336ca448b283aa88 | 2021-11-02T13:53:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | this-is-real | null | this-is-real/mrc-pretrained-roberta-large-1 | 2 | null | transformers | 24,792 | - model: klue/roberta-large
- learning rate: 1e-4
- lr scheduler type: linear
- weight decay: 0.01
- epochs: 5
- checkpoint: 2700 |
tiennvcs/bert-base-uncased-finetuned-vi-infovqa | 2fa675b15d869543a727542c92f68cf70a98fe30 | 2021-12-27T09:57:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/bert-base-uncased-finetuned-vi-infovqa | 2 | null | transformers | 24,793 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.21 | 100 | 4.2058 |
| No log | 0.43 | 200 | 4.0210 |
| No log | 0.64 | 300 | 4.0454 |
| No log | 0.85 | 400 | 3.7557 |
| 4.04 | 1.07 | 500 | 3.8257 |
| 4.04 | 1.28 | 600 | 3.7713 |
| 4.04 | 1.49 | 700 | 3.6075 |
| 4.04 | 1.71 | 800 | 3.6155 |
| 4.04 | 1.92 | 900 | 3.5470 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tiennvcs/bert-large-uncased-finetuned-vi-infovqa | 05bd8ede52b40f920c0bd7c6e7229ca5238ee390 | 2021-12-27T08:30:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/bert-large-uncased-finetuned-vi-infovqa | 2 | null | transformers | 24,794 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.11 | 100 | 4.6256 |
| No log | 0.21 | 200 | 4.4042 |
| No log | 0.32 | 300 | 5.0021 |
| No log | 0.43 | 400 | 4.2825 |
| 4.6758 | 0.53 | 500 | 4.3886 |
| 4.6758 | 0.64 | 600 | 4.2519 |
| 4.6758 | 0.75 | 700 | 4.2977 |
| 4.6758 | 0.85 | 800 | 3.9916 |
| 4.6758 | 0.96 | 900 | 4.1650 |
| 4.1715 | 1.07 | 1000 | 4.5001 |
| 4.1715 | 1.17 | 1100 | 4.0898 |
| 4.1715 | 1.28 | 1200 | 4.1623 |
| 4.1715 | 1.39 | 1300 | 4.3271 |
| 4.1715 | 1.49 | 1400 | 3.9661 |
| 3.7926 | 1.6 | 1500 | 3.8727 |
| 3.7926 | 1.71 | 1600 | 3.8934 |
| 3.7926 | 1.81 | 1700 | 3.7262 |
| 3.7926 | 1.92 | 1800 | 3.7701 |
| 3.7926 | 2.03 | 1900 | 3.7653 |
| 3.5041 | 2.13 | 2000 | 3.9261 |
| 3.5041 | 2.24 | 2100 | 4.0915 |
| 3.5041 | 2.35 | 2200 | 4.0348 |
| 3.5041 | 2.45 | 2300 | 4.0212 |
| 3.5041 | 2.56 | 2400 | 4.4653 |
| 2.8475 | 2.67 | 2500 | 4.2959 |
| 2.8475 | 2.77 | 2600 | 4.1039 |
| 2.8475 | 2.88 | 2700 | 3.8037 |
| 2.8475 | 2.99 | 2800 | 3.7552 |
| 2.8475 | 3.09 | 2900 | 4.2476 |
| 2.5488 | 3.2 | 3000 | 4.6716 |
| 2.5488 | 3.3 | 3100 | 4.7058 |
| 2.5488 | 3.41 | 3200 | 4.6266 |
| 2.5488 | 3.52 | 3300 | 4.5697 |
| 2.5488 | 3.62 | 3400 | 5.1017 |
| 2.0347 | 3.73 | 3500 | 4.6254 |
| 2.0347 | 3.84 | 3600 | 4.4822 |
| 2.0347 | 3.94 | 3700 | 4.9413 |
| 2.0347 | 4.05 | 3800 | 5.3600 |
| 2.0347 | 4.16 | 3900 | 5.7323 |
| 1.6566 | 4.26 | 4000 | 5.8822 |
| 1.6566 | 4.37 | 4100 | 6.0173 |
| 1.6566 | 4.48 | 4200 | 5.6688 |
| 1.6566 | 4.58 | 4300 | 6.0617 |
| 1.6566 | 4.69 | 4400 | 6.6631 |
| 1.3348 | 4.8 | 4500 | 6.0290 |
| 1.3348 | 4.9 | 4600 | 6.2455 |
| 1.3348 | 5.01 | 4700 | 6.0963 |
| 1.3348 | 5.12 | 4800 | 7.0983 |
| 1.3348 | 5.22 | 4900 | 7.5483 |
| 1.0701 | 5.33 | 5000 | 7.7187 |
| 1.0701 | 5.44 | 5100 | 7.4630 |
| 1.0701 | 5.54 | 5200 | 7.1394 |
| 1.0701 | 5.65 | 5300 | 7.0703 |
| 1.0701 | 5.76 | 5400 | 7.5611 |
| 0.9414 | 5.86 | 5500 | 7.6038 |
| 0.9414 | 5.97 | 5600 | 7.4878 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tiennvcs/distilbert-base-uncased-finetuned-squad | ff90c91d9f1be5e8aa7a96d601955478057032ed | 2021-10-19T02:41:19.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 24,795 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
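As a quick illustration, the checkpoint can be queried with the question-answering pipeline (this snippet is not part of the auto-generated card, and the prediction shown is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="tiennvcs/distilbert-base-uncased-finetuned-squad")
result = qa(question="What was the model fine-tuned on?",
            context="The model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.")
print(result["answer"])  # e.g. "the SQuAD dataset"
```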
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
tizaino/bert-base-uncased-finetuned-Pisa | 19996d1c6498db77a33ca1b7126cde1fa392e9b2 | 2022-02-09T18:49:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | tizaino | null | tizaino/bert-base-uncased-finetuned-Pisa | 2 | null | transformers | 24,796 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-Pisa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-Pisa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 1.4146 |
| No log | 2.0 | 18 | 1.1013 |
| No log | 3.0 | 27 | 1.1237 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tk3879110/bert_finetuning_test | c905380a2306c35a725b1a01471e7aa45a46103b | 2021-05-20T07:52:28.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tk3879110 | null | tk3879110/bert_finetuning_test | 2 | null | transformers | 24,797 | Entry not found |
tmills/event-thyme-colon | 1dbb41b929ad56d9a75f57d6cfba853cb0f75381 | 2022-05-02T20:50:17.000Z | [
"pytorch",
"cnlpt",
"transformers"
] | null | false | tmills | null | tmills/event-thyme-colon | 2 | null | transformers | 24,798 | Entry not found |
tnsaiexp/tns-gpt-neo-125M | 0cbb08f752c64bf018e97b1a8adc70928133c5b0 | 2021-12-09T13:17:09.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | tnsaiexp | null | tnsaiexp/tns-gpt-neo-125M | 2 | null | transformers | 24,799 | Entry not found |