modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2 | 67d323d69c21de5bcebfa2cec5703dcd2e357a2e | 2022-05-28T00:44:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2 | 12 | null | transformers | 10,800 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2
This model is a fine-tuned version of [tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2](https://huggingface.co/tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2810
- Precision: 0.7874
- Recall: 0.7514
- F1: 0.7690
- Accuracy: 0.9147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
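The same configuration can be sketched with `transformers.TrainingArguments`; this is an illustrative reconstruction rather than the original training script (the `output_dir` and the surrounding `Trainer` setup are assumptions):
```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=7,
)
```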
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3078 | 0.7675 | 0.5943 | 0.6699 | 0.8842 |
| No log | 2.0 | 66 | 0.2535 | 0.7729 | 0.7486 | 0.7605 | 0.9073 |
| No log | 3.0 | 99 | 0.2417 | 0.7714 | 0.7714 | 0.7714 | 0.9119 |
| No log | 4.0 | 132 | 0.2532 | 0.8031 | 0.7343 | 0.7672 | 0.9142 |
| No log | 5.0 | 165 | 0.2675 | 0.7834 | 0.7543 | 0.7686 | 0.9142 |
| No log | 6.0 | 198 | 0.2750 | 0.7870 | 0.76 | 0.7733 | 0.9159 |
| No log | 7.0 | 231 | 0.2810 | 0.7874 | 0.7514 | 0.7690 | 0.9147 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned_v1.1 | 3bf89cfedc35887b8791d0513203533b91cd7a23 | 2022-05-28T00:55:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned_v1.1 | 12 | null | transformers | 10,801 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-noisy-pretrain-fine-tuned_v1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-noisy-pretrain-fine-tuned_v1.1
This model is a fine-tuned version of [tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1](https://huggingface.co/tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2742
- Precision: 0.8072
- Recall: 0.7657
- F1: 0.7859
- Accuracy: 0.9217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3112 | 0.7601 | 0.5886 | 0.6634 | 0.8773 |
| No log | 2.0 | 66 | 0.2539 | 0.7706 | 0.72 | 0.7445 | 0.9038 |
| No log | 3.0 | 99 | 0.2416 | 0.7755 | 0.76 | 0.7677 | 0.9130 |
| No log | 4.0 | 132 | 0.2536 | 0.8190 | 0.7371 | 0.7759 | 0.9165 |
| No log | 5.0 | 165 | 0.2644 | 0.7982 | 0.7457 | 0.7710 | 0.9176 |
| No log | 6.0 | 198 | 0.2735 | 0.8142 | 0.7514 | 0.7816 | 0.9205 |
| No log | 7.0 | 231 | 0.2742 | 0.8072 | 0.7657 | 0.7859 | 0.9217 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254 | 717c70412daf75b5670678e02d7abf451f1cf5f5 | 2022-05-28T12:04:53.000Z | [
"pytorch",
"camembert",
"text-classification",
"unk",
"dataset:CH0KUN/autotrain-data-TNC_Domain_WangchanBERTa",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | CH0KUN | null | CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254 | 12 | null | transformers | 10,802 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- CH0KUN/autotrain-data-TNC_Domain_WangchanBERTa
co2_eq_emissions: 25.144394918865913
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 921730254
- CO2 Emissions (in grams): 25.144394918865913
## Validation Metrics
- Loss: 0.7080970406532288
- Accuracy: 0.7775925925925926
- Macro F1: 0.7758012615987406
- Micro F1: 0.7775925925925925
- Weighted F1: 0.7758012615987406
- Macro Precision: 0.7833307663368776
- Micro Precision: 0.7775925925925926
- Weighted Precision: 0.7833307663368777
- Macro Recall: 0.7775925925925926
- Micro Recall: 0.7775925925925926
- Weighted Recall: 0.7775925925925926
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
LinaR/t5-base-medium-title-generation | 6f1f242423eb0ed095194d71c230de11df267703 | 2022-05-28T12:27:56.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | LinaR | null | LinaR/t5-base-medium-title-generation | 12 | null | transformers | 10,803 | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-medium-title-generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-base-medium-title-generation
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
batya66/bert-finetuned-ner | 591d1f808bd28e4961342fe157adbd611e033f7f | 2022-05-31T12:02:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | batya66 | null | batya66/bert-finetuned-ner | 12 | null | transformers | 10,804 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9287951211471898
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9384628195520027
- name: Accuracy
type: accuracy
value: 0.985915700241361
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Precision: 0.9288
- Recall: 0.9483
- F1: 0.9385
- Accuracy: 0.9859
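A minimal usage sketch (not part of the original card), assuming the standard `transformers` token-classification pipeline:
```python
from transformers import pipeline

# Load the fine-tuned NER model and merge sub-word predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="batya66/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```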
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0876 | 1.0 | 1756 | 0.0657 | 0.9093 | 0.9349 | 0.9219 | 0.9826 |
| 0.0412 | 2.0 | 3512 | 0.0555 | 0.9357 | 0.9500 | 0.9428 | 0.9867 |
| 0.0205 | 3.0 | 5268 | 0.0622 | 0.9288 | 0.9483 | 0.9385 | 0.9859 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YeRyeongLee/bert-large-uncased-finetuned-filtered-0602 | f990fdeae3d8a80aa0eaa34792771fa83806cde5 | 2022-06-01T22:57:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | YeRyeongLee | null | YeRyeongLee/bert-large-uncased-finetuned-filtered-0602 | 12 | null | transformers | 10,805 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-finetuned-filtered-0602
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-filtered-0602
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8409
- Accuracy: 0.1667
- F1: 0.0476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.8331 | 1.0 | 3180 | 1.8054 | 0.1667 | 0.0476 |
| 1.8158 | 2.0 | 6360 | 1.8196 | 0.1667 | 0.0476 |
| 1.8088 | 3.0 | 9540 | 1.8059 | 0.1667 | 0.0476 |
| 1.8072 | 4.0 | 12720 | 1.7996 | 0.1667 | 0.0476 |
| 1.8182 | 5.0 | 15900 | 1.7962 | 0.1667 | 0.0476 |
| 1.7993 | 6.0 | 19080 | 1.8622 | 0.1667 | 0.0476 |
| 1.7963 | 7.0 | 22260 | 1.8378 | 0.1667 | 0.0476 |
| 1.7956 | 8.0 | 25440 | 1.8419 | 0.1667 | 0.0476 |
| 1.7913 | 9.0 | 28620 | 1.8406 | 0.1667 | 0.0476 |
| 1.7948 | 10.0 | 31800 | 1.8409 | 0.1667 | 0.0476 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
JXL884/distilbert-base-uncased-finetuned-emotion | 3484ce4ede20e070fafc972f23d96f7def7975f8 | 2022-06-02T02:14:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JXL884 | null | JXL884/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,806 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
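A minimal usage sketch (not part of the original card), assuming the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="JXL884/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled that the experiment finally worked!"))
```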
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kktoto/tiny_ktoto_punctuator | 40605dcc630b08a5eb8fa6f6ab9bf5aca134f257 | 2022-06-02T03:54:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/tiny_ktoto_punctuator | 12 | null | transformers | 10,807 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_ktoto_punctuator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_ktoto_punctuator
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1342
- Precision: 0.6446
- Recall: 0.6184
- F1: 0.6312
- Accuracy: 0.9503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1452 | 1.0 | 5561 | 0.1409 | 0.6289 | 0.5973 | 0.6127 | 0.9481 |
| 0.1389 | 2.0 | 11122 | 0.1358 | 0.6415 | 0.6103 | 0.6255 | 0.9497 |
| 0.1352 | 3.0 | 16683 | 0.1342 | 0.6446 | 0.6184 | 0.6312 | 0.9503 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PontifexMaximus/opus-mt-iir-en-finetuned-fa-to-en-finetuned-fa-to-en | fd77e8a0f3fd094cb31c4933723ddb25545c9227 | 2022-06-03T10:51:44.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | PontifexMaximus | null | PontifexMaximus/opus-mt-iir-en-finetuned-fa-to-en-finetuned-fa-to-en | 12 | null | transformers | 10,808 | Entry not found |
SimulSt/distilbert-base-uncased-finetuned-emotion | fbcf0c2ed101a6aaf6b00ccce527049f78ccf301 | 2022-06-06T13:24:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SimulSt | null | SimulSt/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,809 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9250238763128368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8285 | 1.0 | 250 | 0.3203 | 0.905 | 0.9008 |
| 0.2544 | 2.0 | 500 | 0.2202 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
santiviquez/bart-base-finetuned-samsum-en | 1021f8c0b1161e0c43c5d346e056b3e63e007725 | 2022-06-27T20:55:10.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:samsum",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | santiviquez | null | santiviquez/bart-base-finetuned-samsum-en | 12 | null | transformers | 10,810 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: bart-base-finetuned-samsum-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 46.8825
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 45.0692
verified: true
- name: ROUGE-2
type: rouge
value: 20.9049
verified: true
- name: ROUGE-L
type: rouge
value: 37.3128
verified: true
- name: ROUGE-LSUM
type: rouge
value: 40.662
verified: true
- name: loss
type: loss
value: 5.763935565948486
verified: true
- name: gen_len
type: gen_len
value: 18.4921
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-samsum-en
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3676
- Rouge1: 46.8825
- Rouge2: 22.0923
- Rougel: 39.7249
- Rougelsum: 42.9187
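A minimal usage sketch (not part of the original card), assuming the standard `transformers` summarization pipeline; the dialogue below is an invented example in the style of SAMSum:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="santiviquez/bart-base-finetuned-samsum-en",
)

dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5))
```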
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.5172 | 1.0 | 300 | 2.1613 | 47.4152 | 22.8106 | 39.93 | 43.3639 |
| 0.3627 | 2.0 | 600 | 2.2771 | 47.2676 | 22.6325 | 40.1345 | 43.19 |
| 0.2466 | 3.0 | 900 | 2.3676 | 46.8825 | 22.0923 | 39.7249 | 42.9187 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kktoto/wwdd_tiny | 6a56d1fc1b7781ed8da1ad77ca38a149ee499143 | 2022-06-04T13:45:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/wwdd_tiny | 12 | null | transformers | 10,811 | Entry not found |
juancavallotti/t5-small-gec | 7bafa50ee7e83248fecedfd4cd3ce3ba004fc2ef | 2022-06-05T01:51:04.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | juancavallotti | null | juancavallotti/t5-small-gec | 12 | null | transformers | 10,812 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-gec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-gec
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
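A minimal usage sketch (not part of the original card), assuming the standard `transformers` text2text-generation pipeline; the input sentence and prompt format are assumptions, since the expected input is not documented here:
```python
from transformers import pipeline

# Load the fine-tuned grammar-error-correction model.
gec = pipeline("text2text-generation", model="juancavallotti/t5-small-gec")

# Hypothetical ungrammatical input; the exact prompt format is an assumption.
print(gec("He go to school every days."))
```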
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
espejelomar/sentece-embeddings-BETO | b93a4d25f37360c1ebdbc1912100d3c1a70d0af4 | 2022-06-05T05:32:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:code_search_net",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | espejelomar | null | espejelomar/sentece-embeddings-BETO | 12 | null | sentence-transformers | 10,813 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- code_search_net
---
# espejelomar/sentece-embeddings-BETO
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('espejelomar/sentece-embeddings-BETO')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('espejelomar/sentece-embeddings-BETO')
model = AutoModel.from_pretrained('espejelomar/sentece-embeddings-BETO')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=espejelomar/sentece-embeddings-BETO)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 16 with parameters:
```
{'batch_size': 100}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
gokuls/tiny-bert-sst2-distilled-model | d403d363429d84e29ea984c56119b572e7cab5e0 | 2022-06-06T01:31:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | gokuls | null | gokuls/tiny-bert-sst2-distilled-model | 12 | null | transformers | 10,814 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny-bert-sst2-distilled-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.838302752293578
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-distilled-model
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2592
- Accuracy: 0.8383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5303 | 1.0 | 4210 | 1.2542 | 0.8222 |
| 0.4503 | 2.0 | 8420 | 1.1260 | 0.8211 |
| 0.3689 | 3.0 | 12630 | 1.2325 | 0.8234 |
| 0.3122 | 4.0 | 16840 | 1.2533 | 0.8337 |
| 0.2764 | 5.0 | 21050 | 1.2726 | 0.8337 |
| 0.254 | 6.0 | 25260 | 1.2609 | 0.8337 |
| 0.2358 | 7.0 | 29470 | 1.2592 | 0.8383 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.10.1+cu113
- Datasets 1.15.1
- Tokenizers 0.12.1
|
PontifexMaximus/mt5-base-parsinlu-opus-translation_fa_en-finetuned-fa-to-en | fb769c174971f0aa96447960b34ee20c7c6abd65 | 2022-06-06T07:26:12.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | PontifexMaximus | null | PontifexMaximus/mt5-base-parsinlu-opus-translation_fa_en-finetuned-fa-to-en | 12 | null | transformers | 10,815 | Entry not found |
inokufu/bert-base-uncased-xnli-sts-finetuned-education | 78279b6df52e606c2024fbbf1f71df24b82f913b | 2022-06-07T16:39:43.000Z | [
"pytorch",
"bert",
"feature-extraction",
"en",
"dataset:xnli",
"dataset:stsb_multi_mt",
"arxiv:1810.04805",
"arxiv:1809.05053",
"sentence-transformers",
"sentence-similarity",
"transformers",
"Education",
"xnli",
"stsb_multi_mt"
]
| sentence-similarity | false | inokufu | null | inokufu/bert-base-uncased-xnli-sts-finetuned-education | 12 | null | sentence-transformers | 10,816 | ---
pipeline_tag: sentence-similarity
language: en
tags:
- sentence-similarity
- transformers
- Education
- en
- bert
- sentence-transformers
- feature-extraction
- xnli
- stsb_multi_mt
datasets:
- xnli
- stsb_multi_mt
---
# inokufu/bertheo-en
A [sentence-transformers](https://www.SBERT.net) model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Details
This model is based on the English bert-base-uncased pre-trained model [1, 2].
It was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [2]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences.
It was then fine-tuned on a natural language inference task (XNLI) [3]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication).
It was then fine-tuned on a text semantic similarity task (on STS data) [4]. This task consists in training the model to estimate the similarity between two sentences.
This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Learn to code in python", "Become an expert in accounting"]
model = SentenceTransformer('inokufu/bert-base-uncased-xnli-sts-finetuned-education')
embeddings = model.encode(sentences)
print(embeddings)
```
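For semantic search between course descriptions, the embeddings can be compared with the cosine-similarity helper shipped with sentence-transformers (a small sketch, not part of the original card; the query and course strings are invented):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('inokufu/bert-base-uncased-xnli-sts-finetuned-education')

queries = ["Introduction to machine learning"]
courses = ["Learn to code in python", "Become an expert in accounting"]

# Cosine similarity between each query and each course description.
scores = util.cos_sim(model.encode(queries), model.encode(courses))
print(scores)
```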
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Learn to code in python", "Become an expert in accounting"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('inokufu/bert-base-uncased-xnli-sts-finetuned-education')
model = AutoModel.from_pretrained('inokufu/bert-base-uncased-xnli-sts-finetuned-education')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
STS (en) score: 84.61%
## Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## References
[1] https://huggingface.co/bert-base-uncased <br>
[2] https://arxiv.org/abs/1810.04805 <br>
[3] https://arxiv.org/abs/1809.05053 <br>
[4] https://huggingface.co/datasets/stsb_multi_mt <br>
|
ghadeermobasher/Original-BioBERT-BC5CDR-Disease | 45303d42061def853ab6f40264a20ff7c73ab7da | 2022-06-09T11:08:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BioBERT-BC5CDR-Disease | 12 | null | transformers | 10,817 | Entry not found |
ghadeermobasher/WLT-BlueBERT-BC5CDR-Chemical | 19c4ba1c52f325b04f1006547dc7992605dbddbb | 2022-06-09T11:51:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-BlueBERT-BC5CDR-Chemical | 12 | null | transformers | 10,818 | Entry not found |
ghadeermobasher/Original-BioBERT-BC5CDR-Chemical | 935286adcee6936f4934142920bda3ad604f4fca | 2022-06-09T11:52:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BioBERT-BC5CDR-Chemical | 12 | null | transformers | 10,819 | Entry not found |
ghadeermobasher/Original-BioBERT-BC2GM | d8f3fb28745873691c37822c1bd5b4f6b7fa7363 | 2022-06-09T14:21:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BioBERT-BC2GM | 12 | null | transformers | 10,820 | Entry not found |
ghadeermobasher/Original-BioBERT-Linnaeus | a9dba32da24e0726f1dfb27895d3c08114132e94 | 2022-06-09T14:58:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BioBERT-Linnaeus | 12 | null | transformers | 10,821 | Entry not found |
speechbrain/asr-wav2vec2-dvoice-wolof | f6386c7c01da9cc8e483fd37d5dde8c3783adfe1 | 2022-06-10T00:56:54.000Z | [
"wav2vec2",
"feature-extraction",
"wo",
"dataset:Dvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
]
| automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-dvoice-wolof | 12 | null | speechbrain | 10,822 | ---
language: "wo"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- Dvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Wolof (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on an [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) Wolof dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:--------:|:--------:|:--------:|:--------:|
| v2.0 | 4.81 | 16.25 | 4.83 | 16.05 |
# Pipeline description
This ASR system is composed of two different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and is trained with the training transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Wolof dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read the SpeechBrain tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Wolof)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-dvoice-wolof", savedir="pretrained_models/asr-wav2vec2-dvoice-wolof")
asr_model.transcribe_file('speechbrain/asr-wav2vec2-dvoice-wolof/example_wolof.wav')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
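For example (a sketch of the same call shown above, with the extra argument added):
```python
from speechbrain.pretrained import EncoderASR

# Load the model directly onto the GPU for inference.
asr_model = EncoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-dvoice-wolof",
    savedir="pretrained_models/asr-wav2vec2-dvoice-wolof",
    run_opts={"device": "cuda"},
)
asr_model.transcribe_file("speechbrain/asr-wav2vec2-dvoice-wolof/example_wolof.wav")
```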
# Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/DVoice/ASR/CTC
python train_with_wav2vec2.py hparams/train_wol_with_wav2vec.yaml --data_folder=/localscratch/ALFFA_PUBLIC/ASR/WOLOF/data/
```
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
# About DVoice
DVoice is a community initiative that aims to provide African low resources languages with data and models to facilitate their use of voice technologies. The lack of data on these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling recordings that are retrieved from social media. The DVoice platform currently manages 7 languages including Darija (Moroccan Arabic dialect) whose dataset appears on this version, Wolof, Mandingo, Serere, Pular, Diola, and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of technologies together.
# About AIOX Labs
Based in Rabat, London, and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs of companies and deliver their data projects.
- It supports the growth of companies, the optimization of processes, and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business-ready data products with a solid algorithmic base and adaptability for the specific needs of each client.
- A complementary team of PhDs in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems, and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The laboratory's research areas are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution. |
Deborah/bertimbau-finetuned-pos-accelerate | f89c65d9ff974c1339b253332d7a5beeed166636 | 2022-06-13T00:14:39.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Deborah | null | Deborah/bertimbau-finetuned-pos-accelerate | 12 | null | transformers | 10,823 | Entry not found |
hckhck/AI_Education | 9ac244207da302af163aa1dac2ae44b1d10c9f96 | 2022-06-13T07:38:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
]
| text-generation | false | hckhck | null | hckhck/AI_Education | 12 | null | transformers | 10,824 | ---
license: afl-3.0
---
|
ghadeermobasher/CRAFT-Modified-BlueBERT-512 | 965125cbb5221ec44749e6171b3d35ed59b8652d | 2022-06-14T00:11:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/CRAFT-Modified-BlueBERT-512 | 12 | null | transformers | 10,825 | Entry not found |
ghadeermobasher/BC5CDR-Chem-Modified-BioBERT-512 | 62b0040c60a1869b1989fec5469fe49844e15365 | 2022-06-13T23:09:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified-BioBERT-512 | 12 | null | transformers | 10,826 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Modified-SciBERT-512 | 659570b8433dbd04312b3aed72f52ff66c88bcd7 | 2022-06-14T09:45:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Modified-SciBERT-512 | 12 | null | transformers | 10,827 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Original-SciBERT-512 | 62f88eeb010c4cb038bc43fad5272999b0889594 | 2022-06-14T09:49:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Original-SciBERT-512 | 12 | null | transformers | 10,828 | Entry not found |
ghadeermobasher/BC5CDR-Chem-Original-BlueBERT-384 | b8c6614c6c71828dab68cdfa06545541fc33e347 | 2022-06-14T01:43:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Original-BlueBERT-384 | 12 | null | transformers | 10,829 | Entry not found |
corgito/finetuning-sentiment-model-3000-samples | 61eabcd3f5b366aac66fdad2f4ee3d17a6234a5c | 2022-06-15T02:09:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | corgito | null | corgito/finetuning-sentiment-model-3000-samples | 12 | null | transformers | 10,830 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3105
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
ghadeermobasher/BC5CDR-Chem-Modified-BioBERT-384 | dda45ae9e8d9ff94798bfa637d1b6fdc7455ca4d | 2022-06-15T10:48:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified-BioBERT-384 | 12 | null | transformers | 10,831 | Entry not found |
ghadeermobasher/BC5CD-Chem-Modified-PubMedBERT-512 | 31767604ce9b1f2db4dd1061145e0e35a7051310 | 2022-06-15T11:22:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CD-Chem-Modified-PubMedBERT-512 | 12 | null | transformers | 10,832 | Entry not found |
AidenWilliams/wav2vec2-xls-r-300m-mt-50 | dd6738a52c4a9389ef58001d2e8ad960e4a07d9b | 2022-07-25T13:03:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"dataset:mozilla-foundation/common_voice_7_0",
"dataset:MASRI-HEADSET-V2",
"transformers",
"generated_from_trainer",
"low-resource",
"model-index"
]
| automatic-speech-recognition | false | AidenWilliams | null | AidenWilliams/wav2vec2-xls-r-300m-mt-50 | 12 | null | transformers | 10,833 | |
microsoft/swinv2-base-patch4-window12to24-192to384-22kto1k-ft | 6692b9ab6094e3fd4d0dc92a32c5e60c3e47d140 | 2022-07-09T06:22:34.000Z | [
"pytorch",
"swinv2",
"transformers"
]
| null | false | microsoft | null | microsoft/swinv2-base-patch4-window12to24-192to384-22kto1k-ft | 12 | null | transformers | 10,834 | Entry not found |
Rajesh222/distilbert-base-uncased-finetuned-emotion | fa7ce29d5078b446840ccaed1c0a72d202a1027a | 2022-06-16T14:05:04.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Rajesh222 | null | Rajesh222/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,835 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9265425929085783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8085 | 1.0 | 250 | 0.3033 | 0.9065 | 0.9037 |
| 0.2458 | 2.0 | 500 | 0.2133 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.0
- Tokenizers 0.11.6
|
KoichiYasuoka/deberta-large-japanese-aozora-ud-head | e7e4516cb4eec31179c1aaf4f5e3a904c4eb4c7a | 2022-07-23T14:43:58.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| question-answering | false | KoichiYasuoka | null | KoichiYasuoka/deberta-large-japanese-aozora-ud-head | 12 | null | transformers | 10,836 | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# deberta-large-japanese-aozora-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on 青空文庫 (Aozora Bunko) texts for dependency parsing (head detection on long unit words) cast as question-answering, derived from [deberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-aozora) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora-ud-head")
question="国語"
context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs=tokenizer(question,context,return_tensors="pt",return_offsets_mapping=True)
offsets=inputs.pop("offset_mapping").tolist()[0]
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-large-japanese-aozora-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
|
Hardeep/distilbert-base-uncased-finetuned-emotion | ee1699b1c8bb90e4b2ecdcbe4c8fadde18fb98d6 | 2022-06-19T03:39:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Hardeep | null | Hardeep/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,837 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9222308123735177
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2118
- Accuracy: 0.9225
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7992 | 1.0 | 250 | 0.3046 | 0.9085 | 0.9063 |
| 0.2352 | 2.0 | 500 | 0.2118 | 0.9225 | 0.9222 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Danastos/dpr_passage_el_1 | 7973271412ffa4c96b3d1b3388025cf00042835b | 2022-06-19T20:46:37.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
]
| null | false | Danastos | null | Danastos/dpr_passage_el_1 | 12 | null | transformers | 10,838 | Entry not found |
swardiantara/distilbert-base-cased-finetuned-ner | e98f4c2f19a2b840bac3e1b7afa1bc42c07f9056 | 2022-07-14T08:07:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | swardiantara | null | swardiantara/distilbert-base-cased-finetuned-ner | 12 | null | transformers | 10,839 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-cased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.916955017301038
- name: Recall
type: recall
value: 0.9272384712004307
- name: F1
type: f1
value: 0.9220680733371994
- name: Accuracy
type: accuracy
value: 0.9804409254135515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0709
- Precision: 0.9170
- Recall: 0.9272
- F1: 0.9221
- Accuracy: 0.9804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2732 | 1.0 | 878 | 0.0916 | 0.8931 | 0.8961 | 0.8946 | 0.9736 |
| 0.0717 | 2.0 | 1756 | 0.0726 | 0.9166 | 0.9212 | 0.9189 | 0.9794 |
| 0.0364 | 3.0 | 2634 | 0.0709 | 0.9170 | 0.9272 | 0.9221 | 0.9804 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Jeevesh8/std_0pnt2_bert_ft_cola-48 | 5e41e259e00ca08384a1fbce176411e2cfe7cd9c | 2022-06-21T13:33:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-48 | 12 | null | transformers | 10,840 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-55 | 8c705a570df8410562e29295c1bdc14f2f64ffe2 | 2022-06-21T13:30:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-55 | 12 | null | transformers | 10,841 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-52 | 0eacc37454ff9511fd6ab270e15b30708fcb523d | 2022-06-21T13:28:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-52 | 12 | null | transformers | 10,842 | Entry not found |
CobaltAlchemist/Toxicbot | bd43ecdb81b2135e661affe8d40beac8d573f01e | 2022-06-24T06:59:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:gpl-3.0"
]
| text-classification | false | CobaltAlchemist | null | CobaltAlchemist/Toxicbot | 12 | null | transformers | 10,843 | ---
license: gpl-3.0
widget:
- text: "I like you. </s></s> I love you."
---
|
javind/pegasus-xsum-ytubenewssum | 430ba226cff539aacf37874708b46ab4c16affa8 | 2022-07-02T09:08:23.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"license:unlicense",
"autotrain_compatible"
]
| text2text-generation | false | javind | null | javind/pegasus-xsum-ytubenewssum | 12 | null | transformers | 10,844 | ---
license: unlicense
---
|
smangrul/Chat-E | 1f28f1516b7d8938247cf9b68cb3ba4118c677dd | 2022-06-26T09:40:58.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
]
| text2text-generation | false | smangrul | null | smangrul/Chat-E | 12 | null | transformers | 10,845 | ---
license: cc-by-nc-4.0
---
|
tsantosh7/Bailii-Roberta | 17b595455cced13c3760d3f844fdb66b7ddeb71c | 2022-06-26T15:09:54.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"arxiv:1907.11692",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | tsantosh7 | null | tsantosh7/Bailii-Roberta | 12 | null | transformers | 10,846 | ---
license: apache-2.0
tags:
- fill-mask
language:
- en
widget:
- text: "He carefully assessed the financial position of the <mask> disclosed within its accounts, including its pension scheme liabilities."
- text: "Moreover, she had chosen not to give <mask> and therefore had not provided any innocent explanation of her communications."
---
# Pre-trained Language Model for England and Wales Court of Appeal (Criminal Division) Decisions
## Introduction
Research into understanding bias in criminal court decisions needs the support of natural language processing tools.
Pre-trained language models have greatly improved the accuracy of text mining on general texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of court decision texts.
We used the text from the [Bailii website](https://www.bailii.org/ew/cases/EWCA/Crim/) as the training set. Based on the RoBERTa deep language model framework, we constructed the bailii-roberta pre-trained language model with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) and [transformers/mlm_wwm](https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm).
## How to use
### Huggingface Transformers
The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can directly load the bailii-roberta model online.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("tsantosh7/bailii-roberta")
model = AutoModel.from_pretrained("tsantosh7/bailii-roberta")
```
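As a quick sanity check, the masked-language-modelling head can also be queried through the `fill-mask` pipeline. The snippet below is only a minimal sketch that reuses one of the widget sentences above; the exact scores will depend on your environment.
```python
from transformers import pipeline

# minimal sketch: query the MLM head with one of the widget sentences above
fill_mask = pipeline("fill-mask", model="tsantosh7/Bailii-Roberta")
predictions = fill_mask(
    "Moreover, she had chosen not to give <mask> and therefore "
    "had not provided any innocent explanation of her communications."
)
for p in predictions:
    print(f'{p["score"]:.4f}\t{p["token_str"]}')
```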
### Download Models
- The version of the model we provide is `PyTorch`.
### From Huggingface
- Download directly through Huggingface's official website.
- [tsantosh7/bailii-roberta](https://huggingface.co/tsantosh7/Bailii-Roberta/)
## Disclaimer
- The experimental results presented in the report only show the performance under a specific dataset and hyperparameter combination, and cannot represent the essence of each model. The experimental results may change due to random seeds and computing equipment.
- **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.**
## Acknowledgment
- bailii-roberta was trained based on [roberta-base](https://arxiv.org/abs/1907.11692). |
Leo2001/ArmSpellChecker | 4c3a663ae302507712704e5c315832f1d523cdad | 2022-06-29T07:54:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | Leo2001 | null | Leo2001/ArmSpellChecker | 12 | null | transformers | 10,847 | ---
license: mit
---
|
Salvatore/bert-finetuned-mutation-recognition-3 | 66e4d886aa9a8baf5e06967bce7c613ae7f95f15 | 2022-06-29T14:51:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Salvatore | null | Salvatore/bert-finetuned-mutation-recognition-3 | 12 | null | transformers | 10,848 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-mutation-recognition-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mutation-recognition-3
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0727
- Dnamutation F1: 0.6484
- Proteinmutation F1: 0.8571
- Snp F1: 1.0
- Precision: 0.7966
- Recall: 0.7625
- F1: 0.7792
- Accuracy: 0.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Dnamutation F1 | Proteinmutation F1 | Snp F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:------------------:|:------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 324 | 0.0323 | 0.5996 | 0.7886 | 1.0 | 0.6583 | 0.7982 | 0.7215 | 0.9901 |
| 0.0788 | 2.0 | 648 | 0.0314 | 0.6765 | 0.8783 | 1.0 | 0.7453 | 0.8571 | 0.7973 | 0.9907 |
| 0.0788 | 3.0 | 972 | 0.0306 | 0.6391 | 0.8679 | 1.0 | 0.7341 | 0.8232 | 0.7761 | 0.9903 |
| 0.0273 | 4.0 | 1296 | 0.0424 | 0.6360 | 0.8714 | 1.0 | 0.7792 | 0.775 | 0.7771 | 0.9885 |
| 0.0178 | 5.0 | 1620 | 0.0462 | 0.5885 | 0.8683 | 1.0 | 0.7576 | 0.7589 | 0.7583 | 0.9869 |
| 0.0178 | 6.0 | 1944 | 0.0531 | 0.6176 | 0.8701 | 1.0 | 0.7734 | 0.7679 | 0.7706 | 0.9873 |
| 0.0165 | 7.0 | 2268 | 0.0573 | 0.6597 | 0.8658 | 1.0 | 0.8022 | 0.775 | 0.7884 | 0.9881 |
| 0.0144 | 8.0 | 2592 | 0.0636 | 0.6596 | 0.8454 | 1.0 | 0.7919 | 0.7679 | 0.7797 | 0.9871 |
| 0.0144 | 9.0 | 2916 | 0.0710 | 0.6568 | 0.8748 | 1.0 | 0.8159 | 0.7679 | 0.7912 | 0.9872 |
| 0.0108 | 10.0 | 3240 | 0.0727 | 0.6484 | 0.8571 | 1.0 | 0.7966 | 0.7625 | 0.7792 | 0.9872 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Salvatore/bert-finetuned-mutation-recognition-4 | 118eed2bb3500abe2016927c0f205060a9aad884 | 2022-06-29T15:20:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Salvatore | null | Salvatore/bert-finetuned-mutation-recognition-4 | 12 | null | transformers | 10,849 | Entry not found |
sanchit-gandhi/wav2vec2-2-bart-large-tedlium | 6652985c97ae5f582b86a3b8887a6e8672795845 | 2022-07-04T12:42:08.000Z | [
"pytorch",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"en",
"dataset:LIUM/tedlium",
"transformers",
"license:cc-by-4.0"
]
| automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bart-large-tedlium | 12 | 1 | transformers | 10,850 | ---
language:
- en
tags:
- automatic-speech-recognition
datasets:
- LIUM/tedlium
license: cc-by-4.0
metrics:
- name: Dev WER
type: wer
value: 9.0
- name: Test WER
type: wer
value: 6.4
---
## Wav2Vec2-2-Bart-Large-Tedlium
This model is a sequence-2-sequence (seq2seq) model trained on the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) corpus (release 3).
It combines a speech encoder with a text decoder to perform automatic speech recognition. The encoder weights are initialised with the [Wav2Vec2 LV-60k](https://huggingface.co/facebook/wav2vec2-large-lv60) checkpoint from [@facebook](https://huggingface.co/facebook). The decoder weights are initialised with the [Bart large](https://huggingface.co/facebook/bart-large) checkpoint from [@facebook](https://huggingface.co/facebook).
When using the model, make sure that your speech input is sampled at 16kHz.
The model achieves a word error rate (WER) of 9.0% on the dev set and 6.4% on the test set. [Training logs](https://wandb.ai/sanchit-gandhi/tedlium/runs/1w6frnel?workspace=user-sanchit-gandhi) document the training and evaluation progress over 50k steps of fine-tuning.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import AutoProcessor, SpeechEncoderDecoderModel
from datasets import load_dataset
import torch
# load model and processor
processor = AutoProcessor.from_pretrained("sanchit-gandhi/wav2vec2-2-bart-large-tedlium")
model = SpeechEncoderDecoderModel.from_pretrained("sanchit-gandhi/wav2vec2-2-bart-large-tedlium")
# load dummy dataset
ds = load_dataset("sanchit-gandhi/tedlium_dummy", split="validation")
# process audio inputs
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# run inference (greedy search)
generated = model.generate(input_values)
# decode
decoded = processor.batch_decode(generated, skip_special_tokens=True)
print("Target: ", ds["text"][0])
print("Transcription: ", decoded[0])
```
## Evaluation
This code snippet shows how to evaluate **Wav2Vec2-2-Bart-Large-Tedlium** on the TEDLIUM test data.
```python
from datasets import load_dataset
from transformers import AutoProcessor, SpeechEncoderDecoderModel
import torch
from jiwer import wer
tedlium_eval = load_dataset("LIUM/tedlium", "release3", split="test")
def filter_ds(text):
return text != "ignore_time_segment_in_scoring"
# remove samples ignored from scoring
tedlium_eval = tedlium_eval.filter(filter_ds, input_columns=["text"])
model = SpeechEncoderDecoderModel.from_pretrained("sanchit-gandhi/wav2vec2-2-bart-large-tedlium").to("cuda")
processor = AutoProcessor.from_pretrained("sanchit-gandhi/wav2vec2-2-bart-large-tedlium")
gen_kwargs = {
"max_length": 200,
"num_beams": 5,
"length_penalty": 1.2
}
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
generated = model.generate(input_values.to("cuda"), **gen_kwargs)
decoded = processor.batch_decode(generated, skip_special_tokens=True)
batch["transcription"] = decoded[0]
return batch
result = tedlium_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
``` |
Jeevesh8/goog_bert_ft_cola-16 | 75850edf9684a47324724b786027dc419fb4d94e | 2022-06-29T17:33:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-16 | 12 | null | transformers | 10,851 | Entry not found |
Jeevesh8/goog_bert_ft_cola-24 | 9775d864adc37889b2c8ed65ecad915cce7152a9 | 2022-06-29T17:33:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-24 | 12 | null | transformers | 10,852 | Entry not found |
javind/bart-large-cnn-ytubenewssum | ecd4dfef91c4b168e1d1550761efd17e682a06c8 | 2022-07-02T09:12:22.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:unlicense",
"autotrain_compatible"
]
| text2text-generation | false | javind | null | javind/bart-large-cnn-ytubenewssum | 12 | null | transformers | 10,853 | ---
license: unlicense
---
|
javind/t5-base-ytubenewssum | 0f3324f269c47da14c82afcecd908ea8a81ab415 | 2022-07-02T09:15:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:unlicense",
"autotrain_compatible"
]
| text2text-generation | false | javind | null | javind/t5-base-ytubenewssum | 12 | null | transformers | 10,854 | ---
license: unlicense
---
|
tau/spider-trivia-question-encoder | 5ea2279cb019154681c9a530e2b7c54d4953dc5e | 2022-07-04T06:59:40.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers"
]
| feature-extraction | false | tau | null | tau/spider-trivia-question-encoder | 12 | null | transformers | 10,855 | Entry not found |
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-256-40 | 3fc44537ddd313d7e8835eb6549c358743a4febc | 2022-07-05T12:13:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-256-40 | 12 | null | transformers | 10,856 | Entry not found |
wiselinjayajos/t5-end2end-questions-generation-cv-squadV2 | 1d6a4e797d6949a6376d3c090dd2f247e63c850b | 2022-07-06T17:20:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | wiselinjayajos | null | wiselinjayajos/t5-end2end-questions-generation-cv-squadV2 | 12 | null | transformers | 10,857 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-end2end-questions-generation-cv-squadV2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation-cv-squadV2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6703 | 2.17 | 100 | 1.9685 |
| 1.9718 | 4.34 | 200 | 1.8541 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sports-ru/antihate | f5e50402d8aeb623a530d3a0c9841cc16bbb8873 | 2022-07-06T12:31:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | sports-ru | null | sports-ru/antihate | 12 | null | transformers | 10,858 | Entry not found |
paola-md/recipe-roberta-is | 59d378ee5d5118aa7cb5cba65023b08f11b874a1 | 2022-07-07T11:53:27.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | paola-md | null | paola-md/recipe-roberta-is | 12 | null | transformers | 10,859 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: recipe-roberta-is
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-roberta-is
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.334 | 1.0 | 961 | 1.1217 |
| 1.1638 | 2.0 | 1922 | 1.0369 |
| 1.0936 | 3.0 | 2883 | 0.9922 |
| 1.0503 | 4.0 | 3844 | 0.9606 |
| 1.0188 | 5.0 | 4805 | 0.9314 |
| 0.9953 | 6.0 | 5766 | 0.9256 |
| 0.9769 | 7.0 | 6727 | 0.9109 |
| 0.9599 | 8.0 | 7688 | 0.8978 |
| 0.9461 | 9.0 | 8649 | 0.8813 |
| 0.9377 | 10.0 | 9610 | 0.8777 |
| 0.9253 | 11.0 | 10571 | 0.8755 |
| 0.918 | 12.0 | 11532 | 0.8601 |
| 0.9112 | 13.0 | 12493 | 0.8541 |
| 0.9043 | 14.0 | 13454 | 0.8548 |
| 0.8984 | 15.0 | 14415 | 0.8470 |
| 0.8958 | 16.0 | 15376 | 0.8412 |
| 0.8914 | 17.0 | 16337 | 0.8345 |
| 0.8882 | 18.0 | 17298 | 0.8353 |
| 0.8871 | 19.0 | 18259 | 0.8344 |
| 0.8839 | 20.0 | 19220 | 0.8382 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tner/twitter-roberta-base-dec2020-tweetner-2020 | c3db6c21415a3089141a9615ec47a235947767a9 | 2022-07-07T10:09:45.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tner | null | tner/twitter-roberta-base-dec2020-tweetner-2020 | 12 | null | transformers | 10,860 | Entry not found |
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-512-5-30 | f67ccd04a96b383c764dffdf58653d1859cad5f5 | 2022-07-07T14:22:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-512-5-30 | 12 | null | transformers | 10,861 | Entry not found |
OFA-Sys/OFA-huge | e8ba0324416869ef9a5ef70c85224ecf8a68e237 | 2022-07-25T11:49:54.000Z | [
"pytorch",
"ofa",
"transformers",
"license:apache-2.0"
]
| null | false | OFA-Sys | null | OFA-Sys/OFA-huge | 12 | 1 | transformers | 10,862 | ---
license: apache-2.0
---
# OFA-huge
This is the **huge** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json`, which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin`, which contains the model weights. There is no need to worry about a mismatch between Fairseq and transformers, since we have already addressed the issue.
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the model as shown below.
```
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-huge
```
Afterwards, point `ckpt_dir` to the path of OFA-huge, and prepare an image for the test example below. Also, ensure that you have pillow and torchvision in your environment.
```
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> import torch  # needed for torch.tensor(...) in the inputs below
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 480
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
>>> # using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
>>> # using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
jonatasgrosman/exp_w2v2t_zh-cn_r-wav2vec2_s237 | 61b904d36563e597d19a93d1e9f4704f066a0273 | 2022-07-10T02:50:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"zh-CN",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_zh-cn_r-wav2vec2_s237 | 12 | null | transformers | 10,863 | ---
language:
- zh-CN
license: apache-2.0
tags:
- automatic-speech-recognition
- zh-CN
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_zh-cn_r-wav2vec2_s237
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
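The card does not include a transcription snippet; a minimal sketch using the HuggingSound tool mentioned above (the audio paths are placeholders) could look like this:
```python
from huggingsound import SpeechRecognitionModel

# minimal sketch: transcribe local audio files (sampled at 16kHz) with HuggingSound
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_zh-cn_r-wav2vec2_s237")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```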
|
jorge-henao/gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col | 47e513280062863ef0cf3ef9c371cf938cf519a2 | 2022-07-11T16:43:58.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | jorge-henao | null | jorge-henao/gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col | 12 | null | transformers | 10,864 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-historias-conflicto-colpoetry-historias-conflicto-col
This model is a fine-tuned version of [jorge-henao/gpt2-small-spanish-historias-conflicto-col](https://huggingface.co/jorge-henao/gpt2-small-spanish-historias-conflicto-col) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
MichalRoztocki/finetuning-sentiment-model-3000-samples | 1cdd2e2c66172477d79926696228845030261348 | 2022-07-12T19:48:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | MichalRoztocki | null | MichalRoztocki/finetuning-sentiment-model-3000-samples | 12 | null | transformers | 10,865 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3085
- Accuracy: 0.8767
- F1: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Hamzaaa/wav2vec2-base-finetuned-emodb | 8b8db9bc6901c6d9516996490b4aba11112631a5 | 2022-07-13T18:03:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
]
| audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-finetuned-emodb | 12 | null | transformers | 10,866 | Entry not found |
ghadeermobasher/Modified-biobertv1-BioRED-Chem-128-32-30 | 6e7b108c324fe2a1da7eda37f354a5e40bee83cf | 2022-07-13T14:10:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Modified-biobertv1-BioRED-Chem-128-32-30 | 12 | null | transformers | 10,867 | Entry not found |
shivaniNK8/mt5-small-finetuned-amazon-en-es | f1d113002a53bf0d1407b83a22a60005450f929c | 2022-07-14T06:39:22.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | shivaniNK8 | null | shivaniNK8/mt5-small-finetuned-amazon-en-es | 12 | null | transformers | 10,868 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 22.6804
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4413
- Rouge1: 22.6804
- Rouge2: 8.3299
- Rougel: 17.9992
- Rougelsum: 20.7342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.77 | 1.0 | 240 | 2.7230 | 17.25 | 5.629 | 14.0381 | 15.8959 |
| 3.7586 | 2.0 | 480 | 2.5949 | 19.4577 | 6.9354 | 15.772 | 17.8773 |
| 3.4314 | 3.0 | 720 | 2.5355 | 20.0511 | 7.6417 | 16.0889 | 18.4551 |
| 3.2892 | 4.0 | 960 | 2.4845 | 20.3951 | 7.88 | 16.601 | 19.0048 |
| 3.1954 | 5.0 | 1200 | 2.4612 | 20.1806 | 7.2656 | 16.2658 | 18.6222 |
| 3.1128 | 6.0 | 1440 | 2.4544 | 22.5647 | 8.0899 | 17.8057 | 20.487 |
| 3.103 | 7.0 | 1680 | 2.4498 | 22.7048 | 8.384 | 17.978 | 20.6871 |
| 3.0708 | 8.0 | 1920 | 2.4413 | 22.6804 | 8.3299 | 17.9992 | 20.7342 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
helena-balabin/qt-xlm-r-en-nl-mini | c3a8f109e208797cab4c8f500c6fb95ec870db34 | 2022-07-14T07:33:27.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | helena-balabin | null | helena-balabin/qt-xlm-r-en-nl-mini | 12 | null | transformers | 10,869 | Entry not found |
Ngit/clip-rsicd | 631c7a8536ba4c6c8a65cd6f0295256fae5e5db6 | 2022-07-14T18:52:25.000Z | [
"pytorch",
"jax",
"clip",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Ngit | null | Ngit/clip-rsicd | 12 | null | transformers | 10,870 | Entry not found |
nateraw/resnet18-random | a47034ff419d0d042c72ac5eb44e1f7c71cc04bd | 2022-07-14T20:46:45.000Z | [
"pytorch",
"timm",
"image-classification"
]
| image-classification | false | nateraw | null | nateraw/resnet18-random | 12 | null | timm | 10,871 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for resnet18-random |
Amir-UL/JimBot | f390fc2b50aad0e6a74803a11cfa6d89dcbf9690 | 2022-07-15T11:15:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Amir-UL | null | Amir-UL/JimBot | 12 | null | transformers | 10,872 | ---
tags:
- conversational
---
# Jim from The Office |
yongjian/wav2vec2-large-a | 5312b0749f31f41a396daaf1de4d9a3e5d65243a | 2022-07-16T02:43:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:LIUM/tedlium",
"transformers",
"speech",
"audio"
]
| automatic-speech-recognition | false | yongjian | null | yongjian/wav2vec2-large-a | 12 | null | transformers | 10,873 | ---
language: en
datasets:
- LIUM/tedlium
tags:
- speech
- audio
- automatic-speech-recognition
---
Finetuned from [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self).
# Installation
1. PyTorch installation: https://pytorch.org/
2. Install transformers: https://huggingface.co/docs/transformers/installation
e.g., installation by conda
```
>> conda create -n wav2vec2 python=3.8
>> conda install pytorch cudatoolkit=11.3 -c pytorch
>> conda install -c conda-forge transformers
```
# Usage
```python
# Load the model and processor
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import numpy as np
import torch
model = Wav2Vec2ForCTC.from_pretrained(r'yongjian/wav2vec2-large-a')
processor = Wav2Vec2Processor.from_pretrained(r'yongjian/wav2vec2-large-a')
# Load input
np_wav = np.random.normal(size=(16000)).clip(-1, 1) # change it to your sample
# Inference
sample_rate = processor.feature_extractor.sampling_rate
with torch.no_grad():
model_inputs = processor(np_wav, sampling_rate=sample_rate, return_tensors="pt", padding=True)
logits = model(model_inputs.input_values, attention_mask=model_inputs.attention_mask).logits # use .cuda() for GPU acceleration
pred_ids = torch.argmax(logits, dim=-1).cpu()
pred_text = processor.batch_decode(pred_ids)
print('Transcription:', pred_text)
``` |
KoichiYasuoka/roberta-base-thai-spm-ud-head | 75f4eb52eb62aae2c9cdc464fefc58d7d721378f | 2022-07-20T03:52:20.000Z | [
"pytorch",
"roberta",
"question-answering",
"th",
"dataset:universal_dependencies",
"transformers",
"thai",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-thai-spm-ud-head | 12 | null | transformers | 10,874 | ---
language:
- "th"
tags:
- "thai"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "question-answering"
widget:
- text: "กว่า"
context: "หลายหัวดีกว่าหัวเดียว"
- text: "หลาย"
context: "หลายหัวดีกว่าหัวเดียว"
- text: "หัว"
context: "หลาย[MASK]ดีกว่าหัวเดียว"
---
# roberta-base-thai-spm-ud-head
## Model Description
This is a RoBERTa model pretrained on Thai Wikipedia texts for dependency-parsing (head-detection on Universal Dependencies) as question-answering, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Use [MASK] inside `context` to avoid ambiguity when specifying a word that occurs multiple times as `question`.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
question="กว่า"
context="หลายหัวดีกว่าหัวเดียว"
inputs=tokenizer(question,context,return_tensors="pt",return_offsets_mapping=True)
offsets=inputs.pop("offset_mapping").tolist()[0]
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/roberta-base-thai-spm-ud-head")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
|
pardeep/distilbert-base-uncased-finetuned-emotion-ch02 | 823d89023d9fd5ab5030a1c661b449659833ae1b | 2022-07-17T10:54:29.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pardeep | null | pardeep/distilbert-base-uncased-finetuned-emotion-ch02 | 12 | null | transformers | 10,875 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-ch02
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9341801255709286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-ch02
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Accuracy: 0.934
- F1: 0.9342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2923 | 1.0 | 250 | 0.2001 | 0.9275 | 0.9263 |
| 0.1485 | 2.0 | 500 | 0.1703 | 0.934 | 0.9342 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ardauzunoglu/BERT-SGM | 570da99e5846e69dc6f1fd589918307a42a75c96 | 2022-07-17T17:27:22.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | ardauzunoglu | null | ardauzunoglu/BERT-SGM | 12 | null | sentence-transformers | 10,876 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ardauzunoglu/BERT-SGM
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ardauzunoglu/BERT-SGM')
embeddings = model.encode(sentences)
print(embeddings)
```
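Since the embeddings are intended for tasks like clustering or semantic search, a natural follow-up is to score sentence pairs with cosine similarity. The snippet below is a minimal sketch; the query and candidate sentences are placeholders.
```python
from sentence_transformers import SentenceTransformer, util

# minimal sketch: rank candidate sentences against a query by cosine similarity
model = SentenceTransformer('ardauzunoglu/BERT-SGM')
query = "This is an example sentence"
candidates = ["Each sentence is converted", "A completely unrelated sentence"]

query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(query_emb, cand_embs)[0]
for sentence, score in zip(candidates, scores):
    print(f"{score:.4f}\t{sentence}")
```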
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ardauzunoglu/BERT-SGM)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 441 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 100,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
yixi/bert-finetuned-ner | c6c3a6a51124e0e13bc68405723b935d6dbc2364 | 2022-07-18T13:42:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | yixi | null | yixi/bert-finetuned-ner | 12 | null | transformers | 10,877 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.934260639178672
- name: Recall
type: recall
value: 0.9495119488387749
- name: F1
type: f1
value: 0.9418245555462816
- name: Accuracy
type: accuracy
value: 0.9868281627126626
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0573
- Precision: 0.9343
- Recall: 0.9495
- F1: 0.9418
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0854 | 1.0 | 1756 | 0.0639 | 0.9148 | 0.9329 | 0.9238 | 0.9822 |
| 0.0403 | 2.0 | 3512 | 0.0542 | 0.9370 | 0.9512 | 0.9440 | 0.9866 |
| 0.0204 | 3.0 | 5268 | 0.0573 | 0.9343 | 0.9495 | 0.9418 | 0.9868 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
shivarama23/DiT_image_quality | 812bcba4f7823a82712ace84e08b3be54f6c9e21 | 2022-07-19T04:57:58.000Z | [
"pytorch",
"beit",
"image-classification",
"transformers"
]
| image-classification | false | shivarama23 | null | shivarama23/DiT_image_quality | 12 | null | transformers | 10,878 | Entry not found |
juancopi81/distilbert-base-uncased-finetuned-squad-d5716d28 | 1fe3a71a8d751bb22ddfd9cf049b779181e96cd5 | 2022-07-19T14:15:17.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | juancopi81 | null | juancopi81/distilbert-base-uncased-finetuned-squad-d5716d28 | 12 | null | transformers | 10,879 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
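The distillation code itself is not included in this card. As a rough illustration of the second step described above (soft targets from the fine-tuned BERT teacher combined with the usual supervised loss on the gold answer spans), a sketch of such a loss might look as follows; the temperature `T` and mixing weight `alpha` are illustrative assumptions, not the values used here.
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Illustrative task-specific distillation loss for one span head.

    student_logits / teacher_logits: (batch, seq_len) start or end logits.
    labels: gold start or end token positions, shape (batch,).
    """
    # soft targets from the fine-tuned BERT teacher, temperature-scaled
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # usual supervised loss on the SQuAD gold answer positions
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```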
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
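For reference, a minimal sketch of computing these scores with the `squad` metric is shown below; the prediction and reference entries are placeholders that only illustrate the expected format.
```python
from datasets import load_metric

# minimal sketch: the `squad` metric expects id/prediction_text pairs
# plus the matching gold answers from the dataset
squad_metric = load_metric("squad")
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{"id": "1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
```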
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
gemasphi/laprador_pt_pb | 32b5b7f8cad05f0d374fe69ba4882f6de31aa88c | 2022-07-19T17:23:19.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | gemasphi | null | gemasphi/laprador_pt_pb | 12 | null | sentence-transformers | 10,880 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gemasphi/laprador_pt
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gemasphi/laprador_pt')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gemasphi/laprador_pt')
model = AutoModel.from_pretrained('gemasphi/laprador_pt')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/laprador_pt)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
leokai/distilbert-base-uncased-finetuned-wikiandmark_epoch20 | 452c427ed75d2d3196305d1399487cf8e1210671 | 2022-07-20T07:33:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | leokai | null | leokai/distilbert-base-uncased-finetuned-wikiandmark_epoch20 | 12 | null | transformers | 10,881 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-wikiandmark_epoch20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-wikiandmark_epoch20
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0561
- Accuracy: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0224 | 1.0 | 1859 | 0.0277 | 0.9919 |
| 0.0103 | 2.0 | 3718 | 0.0298 | 0.9925 |
| 0.0047 | 3.0 | 5577 | 0.0429 | 0.9924 |
| 0.0038 | 4.0 | 7436 | 0.0569 | 0.9922 |
| 0.0019 | 5.0 | 9295 | 0.0554 | 0.9936 |
| 0.0028 | 6.0 | 11154 | 0.0575 | 0.9928 |
| 0.002 | 7.0 | 13013 | 0.0544 | 0.9926 |
| 0.0017 | 8.0 | 14872 | 0.0553 | 0.9935 |
| 0.001 | 9.0 | 16731 | 0.0498 | 0.9924 |
| 0.0001 | 10.0 | 18590 | 0.0398 | 0.9934 |
| 0.0 | 11.0 | 20449 | 0.0617 | 0.9935 |
| 0.0002 | 12.0 | 22308 | 0.0561 | 0.9944 |
| 0.0002 | 13.0 | 24167 | 0.0755 | 0.9934 |
| 0.0 | 14.0 | 26026 | 0.0592 | 0.9941 |
| 0.0 | 15.0 | 27885 | 0.0572 | 0.9939 |
| 0.0 | 16.0 | 29744 | 0.0563 | 0.9941 |
| 0.0 | 17.0 | 31603 | 0.0587 | 0.9936 |
| 0.0005 | 18.0 | 33462 | 0.0673 | 0.9937 |
| 0.0 | 19.0 | 35321 | 0.0651 | 0.9933 |
| 0.0 | 20.0 | 37180 | 0.0683 | 0.9936 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Dizzykong/large-commands | 01e086b04dae5fce469fc30bfa873a33edad30a8 | 2022-07-21T04:20:09.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-generation | false | Dizzykong | null | Dizzykong/large-commands | 12 | null | transformers | 10,882 | ---
tags:
- generated_from_trainer
model-index:
- name: large-commands
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# large-commands
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
buvata/bertTitle | 2ef225cc466d34aa8e795cc5f8ad255653fdba07 | 2022-07-21T02:16:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | buvata | null | buvata/bertTitle | 12 | null | transformers | 10,883 | Entry not found |
jinwooChoi/SKKU_SA_KES | 843148b90713d68909a3375a689d53763d02d19b | 2022-07-22T05:18:16.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_SA_KES | 12 | null | transformers | 10,884 | Entry not found |
mtreviso/ct5-small-en-wiki | 7400ba7a47220a7c9949d7d62190eb3b676ab186 | 2022-07-25T13:19:21.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"en",
"dataset:wikipedia",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | mtreviso | null | mtreviso/ct5-small-en-wiki | 12 | null | transformers | 10,885 | ---
license: afl-3.0
language: en
tags:
- t5
datasets:
- wikipedia
---
# chunked T5 - small (cT5-small)
Github: https://github.com/mtreviso/chunked-t5
A T5 model trained with a modified loss in which a special end-of-chunk token `</c>` is appended after each masked chunk.
For every sentinel token, the decoder has to predict the corresponding masked tokens followed by `</c>`.
This allows much faster auto-regressive generation, since the decoder can predict tokens for all chunks in parallel.
For example, for the input `the quick brown fox jumps over the lazy dog`:
```
encoder: the <extra_id_0> fox jumps <extra_id_1> the lazy dog
T5 decoder : <extra_id_0> quick brown <extra_id_1> over <extra_id_2>
cT5 decoder: <extra_id_0> quick brown </c> <extra_id_1> over </c> <extra_id_2>
```
The generation may look like this for T5 and cT5:
```
T5: <extra_id_0>
T5: <extra_id_0> quick
T5: <extra_id_0> quick brown
T5: <extra_id_0> quick brown <extra_id_1>
T5: <extra_id_0> quick brown <extra_id_1> over
T5: <extra_id_0> quick brown <extra_id_1> over <extra_id_2>
T5: <extra_id_0> quick brown <extra_id_1> over <extra_id_2> </s>
cT5: <extra_id_0> <pad> <extra_id_1> <pad> <extra_id_2> </s>
cT5: <extra_id_0> quick <pad> <extra_id_1> over <pad> <extra_id_2> </s>
cT5: <extra_id_0> quick brown <pad> <extra_id_1> over </c> <extra_id_2> </s>
cT5: <extra_id_0> quick brown </c> <extra_id_1> over </c> <extra_id_2> </s>
```
In the original T5, the decoder is called \\(n_s + 1 + \sum_i |s_i|\\) times autoregressively,
where \\(n_s\\) is the number of sentinel tokens and \\(s_1,...,s_{n_s}\\) are the predicted chunks.
In contrast, cT5's decoder is called just \\(\max_i |s_i| + 1\\) times.
Generation stops when every chunk has been completed, i.e., when a `</c>` token has been produced for each sentinel.
Alternatively, you can set `max_chunk_size` to force the model to stop once a chunk reaches `max_chunk_size` tokens.
The overhead of calling the decoder with a longer input is less pronounced since this computation can be parallelized in GPUs/TPUs.
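As a concrete illustration of how the targets are laid out, the sketch below rebuilds the encoder input and the cT5 decoder target for the running example using plain string manipulation (the real preprocessing, including span sampling and tokenization, lives in `pretrain_ct5.py`; this snippet is not from the repository):

```python
# Illustration only: where the sentinel and </c> markers go for one example.
tokens = "the quick brown fox jumps over the lazy dog".split()
spans = [(1, 3), (5, 6)]  # masked spans: "quick brown" and "over"

encoder_in, target, prev_end = [], [], 0
for i, (start, end) in enumerate(spans):
    encoder_in += tokens[prev_end:start] + [f"<extra_id_{i}>"]
    target += [f"<extra_id_{i}>"] + tokens[start:end] + ["</c>"]
    prev_end = end
encoder_in += tokens[prev_end:]
target += [f"<extra_id_{len(spans)}>"]

print(" ".join(encoder_in))  # the <extra_id_0> fox jumps <extra_id_1> the lazy dog
print(" ".join(target))      # <extra_id_0> quick brown </c> <extra_id_1> over </c> <extra_id_2>
```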
## Training details
cT5 models use T5's weights as a starting point and are then fine-tuned on the
English [wikipedia](https://huggingface.co/datasets/wikipedia) for 3 epochs,
reaching ~74% validation accuracy (cT5-small).
The training script is in JAX + Flax and can be found in `pretrain_ct5.py`.
Flax checkpoints can be converted to PyTorch via `convert_flax_to_pytorch.py [flax_dirname]`.
## Checkpoints
- ct5-small: https://huggingface.co/mtreviso/ct5-small-en-wiki
- ct5-base: todo
- ct5-large: todo
## Usage
```python
from transformers import AutoTokenizer
from modeling_ct5 import CT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("mtreviso/ct5-small-en-wiki")
model = CT5ForConditionalGeneration.from_pretrained("mtreviso/ct5-small-en-wiki")
```
For training:
```python
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> man </c> <extra_id_1> the </c> <extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
```
For generation:
```python
texts = [
"The <extra_id_0> walks in <extra_id_1> park",
"UN Chief says there is no way to <extra_id_0> in Syria",
]
input_ids = tokenizer(texts, return_tensors="pt", padding=True).input_ids
generated_ids = model.generate(
input_ids,
use_cache=False, # important to set to False to avoid caching
eoc_token_id=tokenizer.vocab['</c>'], # important to set to the correct end-of-chunk id
max_chunk_size=5, # the default is 9999999, which is a large number
)
```
This will produce the following tokens:
```python
>> ['<pad>', '<extra_id_0>', '▁Walking', '▁Trail', '</c>', '<extra_id_1>', '▁the', '</c>', '<extra_id_2>', '</s>']
>> ['<pad>', '<extra_id_0>', '▁treat', '▁Syria', '</c>', '<extra_id_1>', '</s>', '<pad>', '<pad>', '<pad>']
```
You have to pass `use_cache=False` to `generate()` in order to avoid caching during the generation procedure as caching is not available for parallel decoding.
Currently, parallel decoding is only supported for PyTorch (greedy search, greedy sampling, beam search, beam sampling) and JAX (greedy search and greedy sampling).
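The token listings above can be reproduced by converting the generated ids back to token strings, for example (a small inspection snippet, not from the repository):

```python
# Inspect the generated ids as token strings; padding tokens can be stripped afterwards.
for ids in generated_ids:
    print(tokenizer.convert_ids_to_tokens(ids.tolist()))
```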
**Note on the beam search implementation**: my beam search implementation is slower than optimal.
This is because I use the structures provided by HuggingFace's implementation, namely, BeamScores and BeamHypotheses to store the beam search results for each chunk in the input.
In other words, my implementation computes independent "beams" for each chunk rather than for each input sequence.
It is possible to make it faster by using a custom BeamScores and BeamHypotheses class, but I haven't done that yet.
## Evaluation
See the notebook `evaluate_ct5.ipynb` for an example of how to evaluate cT5 in terms of accuracy and perplexity.
The notebook `profile.ipynb` shows how to profile the model to get runtimes.
Here is a comparison between cT5-small and T5-small on a subset of the WikiText-103 dataset using deterministic greedy search:
| Model | Exact match ↑ | Edit distance ratio ↑ | Perplexity ↓ | Time (seconds) ↓ |
|-------|---------------|----------------------|--------------|-----------------|
| T5-small | 0.11 | 0.60 | 2.22 | 44.71 |
| cT5-small | 0.09 | 0.58 | 1.48 | 10.63 |
On this toy dataset, cT5-small has a lower perplexity while being faster than T5-small. However, more experiments are needed for a rigorous evaluation.
If you are interested in applying cT5 to real data, please contact me.
|
AndyChiang/my-test-model | e6392a97d572dd50121bb398803a008dc230bb60 | 2022-07-21T08:08:01.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers",
"generated_from_keras_callback",
"model-index"
]
| text-classification | false | AndyChiang | null | AndyChiang/my-test-model | 12 | 1 | transformers | 10,886 | ---
tags:
- generated_from_keras_callback
model-index:
- name: my-test-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-test-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Krs/distilbert-base-uncased-finetuned-emotion | a58bdb9584d6394c59cfcb610a707a8860049241 | 2022-07-22T08:08:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Krs | null | Krs/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,887 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9213674244320441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2197
- Accuracy: 0.921
- F1: 0.9214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
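For reference, these settings roughly correspond to a `TrainingArguments` configuration like the sketch below; the `output_dir` and the evaluation strategy are assumptions, and the listed Adam betas/epsilon match the trainer's default optimizer, so no explicit optimizer argument is needed:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# the actual training script is not included in this card.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumption: the results table reports one eval per epoch
)
```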
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8255 | 1.0 | 250 | 0.3172 | 0.9055 | 0.9039 |
| 0.2506 | 2.0 | 500 | 0.2197 | 0.921 | 0.9214 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RohanKapur3/test-model | 1524d0ce942cead01996aec6f22b968bb147abf0 | 2022-07-22T05:56:44.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | RohanKapur3 | null | RohanKapur3/test-model | 12 | null | transformers | 10,888 | Entry not found |
jinwooChoi/SKKU_KDW_SA_0722_2 | a8055dce9a20202e559d6e5ba197a0910d4753c9 | 2022-07-25T06:42:57.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_KDW_SA_0722_2 | 12 | null | transformers | 10,889 | Entry not found |
erikanesse/test-trainer-gbb-7 | 405fbaf0e420370c557cc3f761e22b3e4b28b78a | 2022-07-22T17:48:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | erikanesse | null | erikanesse/test-trainer-gbb-7 | 12 | null | transformers | 10,890 | Entry not found |
domenicrosati/deberta-v3-large-finetuned-DAGPap22-synthetic-all | 0c344a01add2bcf5cc99677bb2f33fa45cbd84c5 | 2022-07-23T10:13:32.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers"
]
| text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-DAGPap22-synthetic-all | 12 | null | transformers | 10,891 | Entry not found |
huggingtweets/hillaryclinton-maddow-speakerpelosi | 1831c005cf7c446a1bb3e554c6b0affe7cbefc89 | 2022-07-22T23:16:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/hillaryclinton-maddow-speakerpelosi | 12 | 1 | transformers | 10,892 | ---
language: en
thumbnail: http://www.huggingtweets.com/hillaryclinton-maddow-speakerpelosi/1658531793071/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/59437078/icon-200x200_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1114294290375688193/P9mcJNGb_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1291192333199958017/SvH8J8_P_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rachel Maddow MSNBC & Nancy Pelosi & Hillary Clinton</div>
<div style="text-align: center; font-size: 14px;">@hillaryclinton-maddow-speakerpelosi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rachel Maddow MSNBC & Nancy Pelosi & Hillary Clinton.
| Data | Rachel Maddow MSNBC | Nancy Pelosi | Hillary Clinton |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3250 | 3247 |
| Retweets | 1848 | 277 | 789 |
| Short tweets | 254 | 2 | 63 |
| Tweets kept | 1147 | 2971 | 2395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/329g8cj3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hillaryclinton-maddow-speakerpelosi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/149xp72s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/149xp72s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hillaryclinton-maddow-speakerpelosi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Shenzy/Sentence_Classification4DesignTutor | 34e4ef5afee7b3f08e234e57afd8bfe77351113d | 2022-07-26T03:25:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Shenzy/autotrain-data-sentence_classification",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | Shenzy | null | Shenzy/Sentence_Classification4DesignTutor | 12 | null | transformers | 10,893 | ---
tags: autotrain
language: en
widget:
- text: "An unusual hierarchy in the section near the top where the design seems to prioritise running time over a compacted artist name."
datasets:
- Shenzy/autotrain-data-sentence_classification
co2_eq_emissions: 0.00986494387043499
---
## Validation Metrics
- Loss: 0.6447726488113403
- Accuracy: 0.8263473053892215
- Macro F1: 0.7776555055392036
- Micro F1: 0.8263473053892215
- Weighted F1: 0.8161511591973788
- Macro Precision: 0.8273504273504274
- Micro Precision: 0.8263473053892215
- Weighted Precision: 0.8266697374481806
- Macro Recall: 0.7615518744551003
- Micro Recall: 0.8263473053892215
- Weighted Recall: 0.8263473053892215
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "An unusual hierarchy in the section near the top where the design seems to prioritise running time over a compacted artist name."}' https://api-inference.huggingface.co/models/Shenzy/Sentence_Classification4DesignTutor
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Map class ids to the labels used by this model
labdic = {0: "rationale", 1: "suggestion", 2: "specific_critique"}

model = AutoModelForSequenceClassification.from_pretrained("Shenzy/Sentence_Classification4DesignTutor")
tokenizer = AutoTokenizer.from_pretrained("Shenzy/Sentence_Classification4DesignTutor")

inputs = tokenizer("An unusual hierarchy in the section near the top where the design seems to prioritise running time over a compacted artist name.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring class from the logits
pred_id = outputs.logits.argmax(dim=-1).item()
print(labdic[pred_id])
``` |
phamvanlinh143/bert-fine-tuned-cola | 303716edca581322e47999a19a10bf70c00c19a5 | 2022-07-24T17:40:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | phamvanlinh143 | null | phamvanlinh143/bert-fine-tuned-cola | 12 | null | transformers | 10,894 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5675682416159784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8760
- Matthews Correlation: 0.5676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4768 | 1.0 | 1069 | 0.5682 | 0.5183 |
| 0.3134 | 2.0 | 2138 | 0.6110 | 0.5789 |
| 0.1627 | 3.0 | 3207 | 0.8760 | 0.5676 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Bogula/samsum-512 | 957020bd297916fe52ebd5932faeca6fddbcd218 | 2022-07-25T21:34:21.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Bogula | null | Bogula/samsum-512 | 12 | null | transformers | 10,895 | A smaller SAMSum fine-tune of the Pegasus CNN/DailyMail checkpoint.
512-token input / 64-token output
(reduced due to memory constraints on Colab) |
d2niraj555/distilbert-base-uncased-finetuned-emotion | d9e838a13db9e85bf9a9fecd59eb4c07d1c5882b | 2022-07-27T17:24:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | d2niraj555 | null | d2niraj555/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,896 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241328800048197
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8087 | 1.0 | 250 | 0.3067 | 0.905 | 0.9030 |
| 0.2439 | 2.0 | 500 | 0.2133 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
BramVanroy/bert-base-multilingual-cased-hebban-reviews | d385a288dd48791c869c6a77a7f30123fffc0919 | 2022-07-29T09:41:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"nl",
"dataset:BramVanroy/hebban-reviews",
"transformers",
"sentiment-analysis",
"dutch",
"text",
"license:mit",
"model-index"
]
| text-classification | false | BramVanroy | null | BramVanroy/bert-base-multilingual-cased-hebban-reviews | 12 | null | transformers | 10,897 | ---
datasets:
- BramVanroy/hebban-reviews
language:
- nl
license: mit
metrics:
- accuracy
- f1
- precision
- qwk
- recall
model-index:
- name: bert-base-multilingual-cased-hebban-reviews
results:
- dataset:
config: filtered_sentiment
name: BramVanroy/hebban-reviews - filtered_sentiment - 2.0.0
revision: 2.0.0
split: test
type: BramVanroy/hebban-reviews
metrics:
- name: Test accuracy
type: accuracy
value: 0.7764792899408284
- name: Test f1
type: f1
value: 0.7821329848271866
- name: Test precision
type: precision
value: 0.7907660190770787
- name: Test qwk
type: qwk
value: 0.6813121109021326
- name: Test recall
type: recall
value: 0.7764792899408284
task:
name: sentiment analysis
type: text-classification
tags:
- sentiment-analysis
- dutch
- text
widget:
- text: Wauw, wat een leuk boek! Ik heb me er goed mee vermaakt.
- text: Nee, deze vond ik niet goed. De auteur doet zijn best om je als lezer mee
te trekken in het verhaal maar mij overtuigt het alleszins niet.
- text: Ik vind het niet slecht maar de schrijfstijl trekt me ook niet echt aan. Het
wordt een beetje saai vanaf het vijfde hoofdstuk
---
# bert-base-multilingual-cased-hebban-reviews
# Dataset
- dataset_name: BramVanroy/hebban-reviews
- dataset_config: filtered_sentiment
- dataset_revision: 2.0.0
- labelcolumn: review_sentiment
- textcolumn: review_text_without_quotes
# Training
- optim: adamw_hf
- learning_rate: 5e-05
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- gradient_accumulation_steps: 1
- max_steps: 5001
- save_steps: 500
- metric_for_best_model: qwk
# Best checkpoint based on validation
- best_metric: 0.6828581526810108
- best_model_checkpoint: trained/hebban-reviews/bert-base-multilingual-cased/checkpoint-1500
# Test results of best checkpoint
- accuracy: 0.7764792899408284
- f1: 0.7821329848271866
- precision: 0.7907660190770787
- qwk: 0.6813121109021326
- recall: 0.7764792899408284
## Confusion matrix

## Normalized confusion matrix

# Environment
- cuda_capabilities: 8.0; 8.0
- cuda_device_count: 2
- cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB
- finetuner_commit: 66294c815326c93682003119534cb72009f558c2
- platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
- python_version: 3.9.5
- torch_version: 1.10.0
- transformers_version: 4.21.0
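# Example usage
The card itself does not include an inference snippet; the following is a minimal sketch using the `transformers` pipeline with one of the widget examples above (the predicted label names come from the model's own config):

```python
from transformers import pipeline

# Sentiment analysis of a Dutch book review with this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="BramVanroy/bert-base-multilingual-cased-hebban-reviews",
)
print(classifier("Wauw, wat een leuk boek! Ik heb me er goed mee vermaakt."))
```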
|
yanaiela/roberta-base-epoch_81 | 3feb9aef44461a6a63f27dfe729e13c2e09f995c | 2022-07-29T23:09:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_81",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_81 | 12 | null | transformers | 10,898 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_81
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 81
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_81.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps instead of 500K
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
Evelyn18/roberta-base-spanish-squades-becasIncentivos6 | 8c10020ced5ecf4c4e2b15b27c6560ea0674bebb | 2022-07-28T21:38:04.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-becasIncentivos6 | 12 | null | transformers | 10,899 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasIncentivos6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasIncentivos6
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0023
## Model description
More information needed
## Intended uses & limitations
More information needed
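A minimal usage sketch with the `question-answering` pipeline is shown below; the context and question are invented for illustration and do not come from the becasv2 dataset:

```python
from transformers import pipeline

# Sketch: illustrative Spanish context/question, not taken from becasv2.
qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-becasIncentivos6",
)
result = qa(
    question="¿Quién puede solicitar la beca?",
    context="La beca de incentivos está dirigida a estudiantes de pregrado con promedio sobresaliente.",
)
print(result["answer"], result["score"])
```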
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 2.2257 |
| No log | 2.0 | 6 | 1.8301 |
| No log | 3.0 | 9 | 1.7627 |
| No log | 4.0 | 12 | 1.8773 |
| No log | 5.0 | 15 | 1.9731 |
| No log | 6.0 | 18 | 2.0023 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|