modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gary109/STAS_yolos-base | cfcf08d977d2435f674d2f0971aa6f5d401972a8 | 2022-05-13T22:38:04.000Z | [
"pytorch",
"yolos",
"object-detection",
"transformers"
]
| object-detection | false | gary109 | null | gary109/STAS_yolos-base | 13 | null | transformers | 10,300 | Entry not found |
gonzpen/gbert-large-ft-edu-redux | 287f806c3d601de2c4c606d389cf5185f17e1903 | 2022-05-13T11:23:44.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"license:mit"
]
| text-classification | false | gonzpen | null | gonzpen/gbert-large-ft-edu-redux | 13 | null | transformers | 10,301 | ---
language: de
license: mit
---
# German BERT large fine-tuned to predict educational requirements
This is a fine-tuned version of the German BERT large language model [deepset/gbert-large](https://huggingface.co/deepset/gbert-large). The multilabel task this model was trained on was to predict education requirements from job ad texts. The dataset used for training is not available to the public. The 7 labels in the task are (in the classification head order):
- `'Bachelor'`
- `'Berufsausbildung'`
- `'Doktorat oder äquivalent'`
- `'Höhere Berufsausbildung'`
- `'Master'`
- `'Sonstiges'`
- `'keine Ausbildungserfordernisse'`
The number of examples carrying each of these labels in the training, validation and test splits of the dataset is summarized in the following table:
| Label name | All data | Training | Validation | Test |
|------------|----------|----------|------------|------|
| Bachelor | 521 | 365 | 52 | 104 |
| Berufsausbildung | 1854 | 1298 | 185 | 371 |
| Doktorat oder äquivalent | 38 | 27 | 4 | 7 |
| Höhere Berufsausbildung | 564 | 395 | 56 | 113 |
| Master | 245 | 171 | 25 | 49 |
| Sonstiges | 819 | 573 | 82 | 164 |
| keine Ausbildungserfordernisse | 176 | 123 | 18 | 35 |
## Performance
Training consisted of [minimizing the binary cross-entropy (BCE)](https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_minimization) loss between the model's predictions and the actual labels in the training set. During training, a weighted version of the [label ranking average precision (LRAP)](https://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-average-precision) was tracked for the testing set. LRAP measures what fraction of higher-ranked labels produced by the model were true labels. To account for the label imbalance, the rankings were weighted so that improperly ranked rare labels are penalized more than their more frequent counterparts. After training was complete, the model with the highest weighted LRAP was saved.
```
LRAP: 0.96
```
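As a quick illustration of the multilabel setup described above, below is a minimal inference sketch. It assumes the checkpoint loads with the standard `AutoModelForSequenceClassification` API; the example sentence and the 0.5 sigmoid threshold are illustrative assumptions, not part of the original training setup.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Label names in classification-head order, as listed above
labels = ["Bachelor", "Berufsausbildung", "Doktorat oder äquivalent",
          "Höhere Berufsausbildung", "Master", "Sonstiges",
          "keine Ausbildungserfordernisse"]

tokenizer = AutoTokenizer.from_pretrained("gonzpen/gbert-large-ft-edu-redux")
model = AutoModelForSequenceClassification.from_pretrained("gonzpen/gbert-large-ft-edu-redux")

# Hypothetical job-ad sentence, for illustration only
text = "Abgeschlossenes Masterstudium in Informatik oder eine vergleichbare Ausbildung."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel task: apply a per-label sigmoid and threshold at 0.5 (assumed threshold)
probs = torch.sigmoid(logits)[0]
predicted = [label for label, p in zip(labels, probs) if p > 0.5]
print(predicted)
```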
# See also:
- [deepset/gbert-base](https://huggingface.co/deepset/gbert-base)
- [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- [gonzpen/gbert-base-ft-edu-redux](https://huggingface.co/gonzpen/gbert-base-ft-edu-redux)
## Authors
Rodrigo C. G. Pena: `rodrigocgp [at] gmail.com`
|
tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned | dfd2979f3fcc8c3f39cdd3d4c208e4d8d6055e37 | 2022-05-17T17:07:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned | 13 | null | transformers | 10,302 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-noisy-pretrain-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-noisy-pretrain-fine-tuned
This model is a fine-tuned version of [tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData](https://huggingface.co/tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2925
- Precision: 0.7933
- Recall: 0.7457
- F1: 0.7688
- Accuracy: 0.9147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
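For reference, the hyperparameters listed above could be expressed roughly as follows with the `Trainer` API. This is a sketch rather than the actual training script; `output_dir` is a placeholder, and the Adam betas/epsilon in the list correspond to the `TrainingArguments` defaults.
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters as TrainingArguments (output_dir is a placeholder)
training_args = TrainingArguments(
    output_dir="bert-base-german-cased-noisy-pretrain-fine-tuned",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=7,
)
```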
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3093 | 0.7456 | 0.6029 | 0.6667 | 0.8808 |
| No log | 2.0 | 66 | 0.2587 | 0.7774 | 0.7286 | 0.7522 | 0.9078 |
| No log | 3.0 | 99 | 0.2529 | 0.7775 | 0.7686 | 0.7730 | 0.9136 |
| No log | 4.0 | 132 | 0.2598 | 0.8063 | 0.7257 | 0.7639 | 0.9147 |
| No log | 5.0 | 165 | 0.2783 | 0.7927 | 0.7429 | 0.7670 | 0.9159 |
| No log | 6.0 | 198 | 0.2899 | 0.8019 | 0.74 | 0.7697 | 0.9165 |
| No log | 7.0 | 231 | 0.2925 | 0.7933 | 0.7457 | 0.7688 | 0.9147 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
nreimers/mmarco-mMiniLMv2-L6-H384-v1 | 4ceabf2d1e212e16da0d1fb94d5dea66a9a1cca0 | 2022-05-20T07:39:37.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | nreimers | null | nreimers/mmarco-mMiniLMv2-L6-H384-v1 | 13 | null | transformers | 10,303 | Entry not found |
sanjay-m1/active-to-passive | 7e6ae970fa462f96f314c59789bdf711d2c69ed8 | 2022-05-21T18:23:14.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sanjay-m1 | null | sanjay-m1/active-to-passive | 13 | null | transformers | 10,304 | ## This model belongs to the Styleformer project
[Please refer to the GitHub page](https://github.com/PrithivirajDamodaran/Styleformer)
|
sanjay-m1/passive-to-active | 671b6b548ce50b6f9d1589fc71a6a2ebe9c4ecd6 | 2022-05-21T18:32:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sanjay-m1 | null | sanjay-m1/passive-to-active | 13 | null | transformers | 10,305 | ## This model belongs to the Styleformer project
[Please refer to the GitHub page](https://github.com/PrithivirajDamodaran/Styleformer)
|
XeSaad/bert-finetuned-ner | bde0b24e33c2b90720cc0c6e6cef72b3e805e433 | 2022-05-24T12:48:15.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | XeSaad | null | XeSaad/bert-finetuned-ner | 13 | null | transformers | 10,306 | Entry not found |
aakorolyova/outcome_significance_relation | 5d8d320a6379a25b02f6b72c0adbc432349eed24 | 2022-05-25T19:13:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aakorolyova | null | aakorolyova/outcome_significance_relation | 13 | null | transformers | 10,307 | <h1>Model description</h1>
This is a fine-tuned BioBERT model for extracting the relation between clinical trial outcome and its significance level. The task is framed as sentence classification:
- you first need to extract the entities - outcomes and significance levels. For outcomes, you could use the model https://huggingface.co/aakorolyova/reported_outcome_extraction. For significance levels, we have previously used a rule-based approach that worked well; we plan to make the code available in https://github.com/aakorolyova/DeSpin-2.0 soon.
- then, for each pair of outcome and significance level, you mask the entity texts as @OUTCOME$ and @SIGNIFICANCE$
- you run the prediction on the sentence with the masked outcome-significance level pair to get the label (0 if the entities are unrelated, 1 if they are related).
For example, the sentence "Intubation conditions (succinylcholine 8.3 ± 0.8; rocuronium 8.2 ± 0.9; P = 0.7) and failed first intubation attempts (succinylcholine 32/200; rocuronium 36/201; P = 1.0) did not differ between the groups." contains several outcomes ("Intubation conditions", "failed first intubation attempts") and significance levels ("P = 0.7", "P = 1.0"). Masked sentence for each pair and the expected label are as follows:
```
@OUTCOME$ (succinylcholine 8.3 ± 0.8; rocuronium 8.2 ± 0.9; @SIGNIFICANCE$) and failed first intubation attempts (succinylcholine 32/200; rocuronium 36/201; P = 1.0) did not differ between the groups. 1
@OUTCOME$ (succinylcholine 8.3 ± 0.8; rocuronium 8.2 ± 0.9; P = 0.7) and failed first intubation attempts (succinylcholine 32/200; rocuronium 36/201; @SIGNIFICANCE$) did not differ between the groups. 0
Intubation conditions (succinylcholine 8.3 ± 0.8; rocuronium 8.2 ± 0.9; P = 0.7) and @OUTCOME$ (succinylcholine 32/200; rocuronium 36/201; @SIGNIFICANCE$) did not differ between the groups. 1
Intubation conditions (succinylcholine 8.3 ± 0.8; rocuronium 8.2 ± 0.9; @SIGNIFICANCE$) and @OUTCOME$ (succinylcholine 32/200; rocuronium 36/201; P = 1.0) did not differ between the groups. 0
```
This is the second version of the model; the original model development was reported in:
Anna Koroleva, Patrick Paroubek. Extracting relations between outcome and significance level in Randomized Controlled Trials (RCTs) publications. Proceedings of ACL BioNLP workshop, 2019 https://aclanthology.org/W19-5038/
The original work was conducted within the scope of the Assisted authoring for avoiding inadequate claims in scientific reporting PhD project of the Methods for Research on Research (MiRoR, http://miror-ejd.eu/) program.
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model was originally intended to be used as a part of spin (unjustified presentation of trial results) detection pipeline in articles reporting Randomised controlled trials (see Anna Koroleva, Sanjay Kamath, Patrick MM Bossuyt, Patrick Paroubek. DeSpin: a prototype system for detecting spin in biomedical publications. Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing. https://aclanthology.org/2020.bionlp-1.5/). It can also be used separately, for predicting outcome - significance level relation.
The main limitation is that the model was trained on a fairly small sample of data annotated by a single annotator. Annotating more data or involving more annotators was not possible within the PhD project.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. Sample code for getting model predictions is shown below:
```
import numpy as np
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForSequenceClassification.from_pretrained("aakorolyova/outcome_significance_relation")
text1 = "@OUTCOME$ (succinylcholine 8.3 ± 0.8; rocuronium 8.2 ± 0.9; @SIGNIFICANCE$) and failed first intubation attempts (succinylcholine 32/200; rocuronium 36/201; P = 1.0) did not differ between the groups."
text2 = "@OUTCOME$ (succinylcholine 8.3 ± 0.8; rocuronium 8.2 ± 0.9; P = 0.7) and failed first intubation attempts (succinylcholine 32/200; rocuronium 36/201; @SIGNIFICANCE$) did not differ between the groups."
tokenized_input1 = tokenizer(text1, padding="max_length", truncation=True, return_tensors='pt')
output1 = model(**tokenized_input1)['logits']
output1 = np.argmax(output1.detach().numpy(), axis=1)
print(output1)
tokenized_input2 = tokenizer(text2, padding="max_length", truncation=True, return_tensors='pt')
output2 = model(**tokenized_input2)['logits']
output2 = np.argmax(output2.detach().numpy(), axis=1)
print(output2)
```
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Outcome_significance_relation
<h1>Training procedure</h1>
The model was fine-tuned using Huggingface Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
Precision: 94.96%
Recall: 96.35%
F1: 95.65%
|
abdulmatinomotoso/emotion_detection_finetuned_distilbert | 7949bab8e24407203c510a9a456db75cea57e9f0 | 2022-05-25T15:55:04.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | abdulmatinomotoso | null | abdulmatinomotoso/emotion_detection_finetuned_distilbert | 13 | null | transformers | 10,308 | Entry not found |
huggingtweets/rumi_quote | 458ad09c67505eaded84e02e9b1198638245ba4d | 2022-06-20T19:20:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/rumi_quote | 13 | null | transformers | 10,309 | ---
language: en
thumbnail: http://www.huggingtweets.com/rumi_quote/1655752799916/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/477092904758808577/3RrEtx04_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rumi</div>
<div style="text-align: center; font-size: 14px;">@rumi_quote</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rumi.
| Data | Rumi |
| --- | --- |
| Tweets downloaded | 3197 |
| Retweets | 29 |
| Short tweets | 24 |
| Tweets kept | 3144 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rvs1ymy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rumi_quote's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cd1jhcf5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cd1jhcf5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rumi_quote')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kktoto/ty_punctuator | 4003fe5acfb6dbaa0457c02e0a777cce8e68e400 | 2022-05-28T07:42:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/ty_punctuator | 13 | null | transformers | 10,310 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ty_punctuator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ty_punctuator
This model is a fine-tuned version of [kktoto/kt_punc](https://huggingface.co/kktoto/kt_punc) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0937
- Precision: 0.7436
- Recall: 0.7694
- F1: 0.7563
- Accuracy: 0.9656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0967 | 1.0 | 5561 | 0.0937 | 0.7436 | 0.7694 | 0.7563 | 0.9656 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned_v2 | 1f93814356cd35b296febea4dc8897d575002943 | 2022-05-29T23:53:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned_v2 | 13 | null | transformers | 10,311 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-noisy-pretrain-fine-tuned_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-noisy-pretrain-fine-tuned_v2
This model is a fine-tuned version of [tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v2](https://huggingface.co/tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2872
- Precision: 0.7870
- Recall: 0.76
- F1: 0.7733
- Accuracy: 0.9159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3105 | 0.7731 | 0.5743 | 0.6590 | 0.8813 |
| No log | 2.0 | 66 | 0.2632 | 0.7588 | 0.7371 | 0.7478 | 0.9055 |
| No log | 3.0 | 99 | 0.2517 | 0.7630 | 0.7543 | 0.7586 | 0.9096 |
| No log | 4.0 | 132 | 0.2590 | 0.8145 | 0.74 | 0.7754 | 0.9171 |
| No log | 5.0 | 165 | 0.2665 | 0.7939 | 0.7486 | 0.7706 | 0.9165 |
| No log | 6.0 | 198 | 0.2854 | 0.7951 | 0.7429 | 0.7681 | 0.9147 |
| No log | 7.0 | 231 | 0.2872 | 0.7870 | 0.76 | 0.7733 | 0.9159 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-Bert-base-uncased | 0a9fcd1015b94bd3e9d84bdf0c902635b7db08c5 | 2022-05-30T07:48:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jkhan447 | null | jkhan447/sarcasm-detection-Bert-base-uncased | 13 | null | transformers | 10,312 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0623
- Accuracy: 0.7127
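A minimal usage sketch with the standard text-classification pipeline is shown below. The meaning of the returned labels (e.g. `LABEL_0`/`LABEL_1`) is not documented in this card, so the mapping to sarcastic vs. non-sarcastic is an assumption to verify, and the example sentence is purely illustrative.
```python
from transformers import pipeline

# Minimal usage sketch; verify how LABEL_0/LABEL_1 map to sarcastic vs. non-sarcastic
classifier = pipeline(
    "text-classification",
    model="jkhan447/sarcasm-detection-Bert-base-uncased",
)
print(classifier("Oh great, another Monday. Exactly what I needed."))
```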
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
eugenecamus/distilbert-imdb-demo | 82392d47b4b8fc48fcfcd192ca0a86fb65c31e3b | 2022-06-02T05:17:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | eugenecamus | null | eugenecamus/distilbert-imdb-demo | 13 | null | transformers | 10,313 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-imdb-demo
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-demo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4328
- Accuracy: 0.928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3459 | 1.0 | 2657 | 0.2362 | 0.9091 |
| 0.1612 | 2.0 | 5314 | 0.2668 | 0.9248 |
| 0.0186 | 3.0 | 7971 | 0.3274 | 0.9323 |
| 0.1005 | 4.0 | 10628 | 0.3978 | 0.9277 |
| 0.0006 | 5.0 | 13285 | 0.4328 | 0.928 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
osanseviero/my-helsinki-duplicate | 11e799d173fdca909d0bf1d3613c140552737ad5 | 2022-06-01T15:58:23.000Z | [
"pytorch",
"rust",
"marian",
"text2text-generation",
"zh",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | osanseviero | null | osanseviero/my-helsinki-duplicate | 13 | null | transformers | 10,314 | ---
language:
- zh
- en
tags:
- translation
license: apache-2.0
---
### zho-eng
* source group: Chinese
* target group: English
* OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)
* model: transformer
* source language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |
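A minimal usage sketch for this checkpoint, assuming the standard `transformers` translation pipeline for Marian models; the input sentence is only an illustrative example.
```python
from transformers import pipeline

# Minimal Chinese -> English translation sketch for this Marian checkpoint
translator = pipeline("translation", model="osanseviero/my-helsinki-duplicate")
print(translator("我喜欢学习新的语言。"))
```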
### System Info:
- hf_name: zho-eng
- source_languages: zho
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'en']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt
- src_alpha3: zho
- tgt_alpha3: eng
- short_pair: zh-en
- chrF2_score: 0.5479999999999999
- bleu: 36.1
- brevity_penalty: 0.948
- ref_len: 82826.0
- src_name: Chinese
- tgt_name: English
- train_date: 2020-07-17
- src_alpha2: zh
- tgt_alpha2: en
- prefer_old: False
- long_pair: zho-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
dsghrg/bert-finetuned-ner | 5bc7111cd25a9f929a9385f6068134b748d7db5f | 2022-06-02T08:18:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | dsghrg | null | dsghrg/bert-finetuned-ner | 13 | null | transformers | 10,315 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.933895223929929
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9423830567831235
- name: Accuracy
type: accuracy
value: 0.9863572143403779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0646
- Precision: 0.9339
- Recall: 0.9510
- F1: 0.9424
- Accuracy: 0.9864
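A minimal usage sketch with the standard token-classification pipeline is shown below; the example sentence is illustrative, and `aggregation_strategy="simple"` is an assumption used here to group word pieces into entity spans.
```python
from transformers import pipeline

# Minimal NER sketch; aggregation_strategy="simple" merges word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="dsghrg/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```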
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0864 | 1.0 | 1756 | 0.0659 | 0.9161 | 0.9372 | 0.9265 | 0.9830 |
| 0.0403 | 2.0 | 3512 | 0.0616 | 0.9271 | 0.9483 | 0.9376 | 0.9855 |
| 0.0199 | 3.0 | 5268 | 0.0646 | 0.9339 | 0.9510 | 0.9424 | 0.9864 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
EventMiner/bigbird-roberta-large-en-doc | 805035c55952661cee2aec2c7bf2c235e7a56c4d | 2022-06-19T15:24:06.000Z | [
"pytorch",
"big_bird",
"text-classification",
"en",
"transformers",
"news event detection",
"document level",
"EventMiner",
"license:apache-2.0"
]
| text-classification | false | EventMiner | null | EventMiner/bigbird-roberta-large-en-doc | 13 | null | transformers | 10,316 | ---
language: en
tags:
- news event detection
- document level
- EventMiner
license: apache-2.0
---
# EventMiner
EventMiner is designed for multilingual news event detection. The goal of news event detection is the automatic extraction of event details from news articles. This event extraction can be done at different levels: document, sentence and word, ranging from coarse-grained to fine-grained information.
We submitted the best results based on EventMiner to [CASE 2021 shared task 1: *Multilingual Protest News Detection*](https://competitions.codalab.org/competitions/31247). Our approach won first place in English for the document level task while ranking within the top four solutions for other languages: Portuguese, Spanish, and Hindi.
*EventMiner/bigbird-roberta-large-en-doc* is a bigbird-roberta-large sequence classification model fine-tuned on English document-level data of the multilingual version of the GLOCON gold standard dataset released with [CASE 2021](https://aclanthology.org/2021.case-1.11/). <br>
Labels:
- Label_0: News article does not contain information about a past or ongoing socio-political event
- Label_1: News article contains information about a past or ongoing socio-political event
More details about the training procedure are available with our [codebase](https://github.com/HHansi/EventMiner).
# How to Use
## Load Model
```python
from transformers import BigBirdTokenizer, BigBirdForSequenceClassification
model_name = 'EventMiner/bigbird-roberta-large-en-doc'
tokenizer = BigBirdTokenizer.from_pretrained(model_name)
model = BigBirdForSequenceClassification.from_pretrained(model_name)
```
## Classification
```python
from transformers import pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("Police arrested five more student leaders on Monday when implementing the strike call given by MSU students union as a mark of protest against the decision to introduce payment seats in first-year commerce programme.")
```
# Citation
If you use this model, please consider citing the following paper.
```
@inproceedings{hettiarachchi-etal-2021-daai,
title = "{DAAI} at {CASE} 2021 Task 1: Transformer-based Multilingual Socio-political and Crisis Event Detection",
author = "Hettiarachchi, Hansi and
Adedoyin-Olowe, Mariam and
Bhogal, Jagdev and
Gaber, Mohamed Medhat",
booktitle = "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.case-1.16",
doi = "10.18653/v1/2021.case-1.16",
pages = "120--130",
}
``` |
Classroom-workshop/assignment1-francesco | 70430106f9e86432f099371956a1140331046d86 | 2022-06-02T15:25:05.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:mit",
"model-index"
]
| automatic-speech-recognition | false | Classroom-workshop | null | Classroom-workshop/assignment1-francesco | 13 | null | transformers | 10,317 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.0
---
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation"
)
input_features = processor(
    ds[0]["audio"]["array"],
    sampling_rate=16_000,
    return_tensors="pt"
).input_features  # Batch size 1
generated_ids = model.generate(input_ids=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
    features = processor(batch["audio"]["array"], sampling_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")
    gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
    return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
OneFly/distilbert-base-uncased-finetuned-emotion | 6024e4b827ca2df6b042d0fd89325e33b760bc6c | 2022-06-02T16:28:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | OneFly | null | OneFly/distilbert-base-uncased-finetuned-emotion | 13 | null | transformers | 10,318 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9279829352545553
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2108
- Accuracy: 0.928
- F1: 0.9280
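A minimal usage sketch with the standard text-classification pipeline is shown below. The `emotion` dataset defines six classes (sadness, joy, love, anger, fear, surprise); if the checkpoint's config only stores generic `LABEL_i` names, mapping them back to those classes is left to the user. The example sentence is illustrative.
```python
from transformers import pipeline

# Minimal usage sketch; check model.config.id2label for the label names stored in the config
classifier = pipeline("text-classification", model="OneFly/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```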
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8434 | 1.0 | 250 | 0.3075 | 0.9085 | 0.9058 |
| 0.2472 | 2.0 | 500 | 0.2108 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
rbawden/CCASS-auto-titrages-base | e2bd0cc6a7be49ee2806c3594752d2233645c4ff | 2022-07-05T21:42:01.000Z | [
"pytorch",
"fsmt",
"fr",
"transformers",
"license:cc-by-4.0"
]
| null | false | rbawden | null | rbawden/CCASS-auto-titrages-base | 13 | null | transformers | 10,319 | ---
language: fr
license: cc-by-4.0
---
# Cour de Cassation automatic *titrage* prediction model
Model for the automatic prediction of *titrages* (keyword sequence) from *sommaires* (synthesis of legal cases). The models are described in [this paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf). If you use this model, please cite our research paper (see [below](#cite)).
## Model description
The model is a transformer-base model trained on parallel data (sommaires-titrages) provided by the Cour de Cassation. The model was initially trained using the Fairseq toolkit, converted to HuggingFace and then fine-tuned on the original training data to smooth out minor differences that arose during the conversion process. Tokenisation is performed using a SentencePiece model, the BPE strategy and a vocab size of 8000.
### Intended uses & limitations
This model is to be used to produce *titrages* for those *sommaires* that do not have them or to complement existing (manually) created *titrages*.
### How to use
Model input is the *matière* (matter) concatenated to the text from the sommaire, separated by the token `<t>`. Each example should be on a single line. E.g. `bail <t> La recommendation du tribunal selon l'article...` (fictive example for illustrative purposes). The maximum input length of the model is 1024 input tokens (after tokenisation).
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokeniser = AutoTokenizer.from_pretrained("rbawden/CCASS-auto-titrages-base")
model = AutoModelForSeq2SeqLM.from_pretrained("rbawden/CCASS-auto-titrages-base")
matiere = "matter"
sommaire = "full text from the sommaire on a single line"
inputs = tokeniser([matiere + " <t> " + sommaire], return_tensors='pt')
outputs = model.generate(inputs['input_ids'])
tokeniser.batch_decode(outputs, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
### Limitations and bias
The models' predictions should not be taken as ground-truth *titrages* and should always be indicated as being automatically generated. They were designed not to be used as such, but to improve search coverage for improved similarity prediction between different cases (the predicted *titrages* being used to predict the similarity).
The model is not constrained to predict *titres* that have previously been seen, so this should be taken into account in the deployment of this model as a *titrage* tool in order to avoid the multiplication of different *titres*.
## Training data
Training data is provided by the Cour de Cassation (the original source being Jurinet data, but with pseudo-anonymisation applied). For training, we use a total of 159,836 parallel examples (each example is a sommaire-titrage pair). Our development data consists of 1,833 held-out examples.
## Training procedure
### Preprocessing
We use SentencePiece, the BPE strategy and a joint vocabulary of 8000 tokens. This model was converted into the HuggingFace format and integrates a number of normalisation processes (e.g. removing double doubles, apostrophes and quotes, normalisation of different accent formats, lowercasing).
### Training
The model was initially trained using Fairseq until convergence on the development set (according to our customised weighted accuracy measure - please see [the paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf) for more details). The model was then converted to HuggingFace and training continued to smooth out incoherences introduced during the conversion procedure (incompatibilities in the way the SentencePiece and NMT vocabularies are defined, linked to HuggingFace vocabularies being necessarily the same as the tokeniser vocabulary, a constraint that is not imposed in Fairseq).
### Evaluation results
Full results for the initial Fairseq models can be found in [the paper](https://hal.inria.fr/hal-03663110/file/LREC_2022___CCass_Inria-camera-ready.pdf).
Results on this converted model coming soon!
## BibTex entry and citation info
<a name="cite"></a>
If you use this work, please cite the following article:
Thibault Charmet, Inès Cherichi, Matthieu Allain, Urszula Czerwinska, Amaury Fouret, Benoît Sagot and Rachel Bawden, 2022. [**Complex Labelling and Similarity Prediction in Legal Texts: Automatic Analysis of France’s Court of Cassation Rulings**](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.509.pdf). In Proceedings of the 13th Language Resources and Evaluation Conference, Marseille, France.
```
@inproceedings{charmet-et-al-2022-complex,
title = {Complex Labelling and Similarity Prediction in Legal Texts: Automatic Analysis of France’s Court of Cassation Rulings},
author = {Charmet, Thibault and Cherichi, Inès and Allain, Matthieu and Czerwinska, Urszula and Fouret, Amaury and Sagot, Benoît and Bawden, Rachel},
booktitle = {Proceedings of the 13th Language Resources and Evaluation Conference},
year = {2022},
address = {Marseille, France},
pages = {4754--4766},
url = {http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.509.pdf}
}
```
|
nbroad/splinter-base-squad2 | 128ad722d483ac3e436ad5e42ff8dddd31100a98 | 2022-06-04T03:47:06.000Z | [
"pytorch",
"tensorboard",
"splinter",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | nbroad | null | nbroad/splinter-base-squad2 | 13 | null | transformers | 10,320 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: splinter-base-squad2_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-base-squad2_3
This model is a fine-tuned version of [tau/splinter-base-qass](https://huggingface.co/tau/splinter-base-qass) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Anery/bert-finetuned-ner | 5b1cc1b214f040f781613ee945040026474d1eab | 2022-06-07T22:48:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Anery | null | Anery/bert-finetuned-ner | 13 | null | transformers | 10,321 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0244
- Precision: 0.7368
- Recall: 0.4
- F1: 0.5185
- Accuracy: 0.9919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 14 | 0.0598 | 0.0 | 0.0 | 0.0 | 0.9870 |
| No log | 2.0 | 28 | 0.0357 | 0.0 | 0.0 | 0.0 | 0.9894 |
| No log | 3.0 | 42 | 0.0256 | 0.75 | 0.2571 | 0.3830 | 0.9910 |
| No log | 4.0 | 56 | 0.0244 | 0.7368 | 0.4 | 0.5185 | 0.9919 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
amehta633/cifar-10-vgg-pretrained | 2e54558f39d76c7dada2c566610b4e31cbad47ae | 2022-06-08T04:01:09.000Z | [
"transformers",
"image-classification",
"pytorch"
]
| image-classification | false | amehta633 | null | amehta633/cifar-10-vgg-pretrained | 13 | null | transformers | 10,322 | ---
tags:
- image-classification
- pytorch
---
|
carblacac/twitter-sentiment-analysis | 639782f8a57a5bbc49d97e43940f601dff006fc3 | 2022-06-08T22:40:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:new_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | carblacac | null | carblacac/twitter-sentiment-analysis | 13 | null | transformers | 10,323 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- new_dataset
metrics:
- accuracy
model-index:
- name: sentiment-analysis-twitter
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: new_dataset
type: new_dataset
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7965
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis-twitter
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the new_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4579
- Accuracy: 0.7965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5315 | 1.0 | 157 | 0.4517 | 0.788 |
| 0.388 | 2.0 | 314 | 0.4416 | 0.8 |
| 0.3307 | 3.0 | 471 | 0.4579 | 0.7965 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Marvin67/distil_covid | 617c3401d893d068725fd938396e64a0f062687b | 2022-06-09T00:44:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:other"
]
| text-classification | false | Marvin67 | null | Marvin67/distil_covid | 13 | null | transformers | 10,324 | ---
license: other
---
|
ghadeermobasher/WLT-SciBERT-NCBI | 88d06551985d3b4a1c7c08b5fb64f40b3120a8c6 | 2022-06-09T11:43:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-SciBERT-NCBI | 13 | null | transformers | 10,325 | Entry not found |
aspis/swin-finetuned-food101 | aad5a07687f7372495da39804ee4c21a9c374fc6 | 2022-06-28T11:02:36.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:food101",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | aspis | null | aspis/swin-finetuned-food101 | 13 | null | transformers | 10,326 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: swin-finetuned-food101
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9210297029702971
- task:
type: image-classification
name: Image Classification
dataset:
name: food101
type: food101
config: default
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9135841584158416
verified: true
- name: Precision Macro
type: precision
value: 0.9151645786633058
verified: true
- name: Precision Micro
type: precision
value: 0.9135841584158416
verified: true
- name: Precision Weighted
type: precision
value: 0.915164578663306
verified: true
- name: Recall Macro
type: recall
value: 0.9135841584158414
verified: true
- name: Recall Micro
type: recall
value: 0.9135841584158416
verified: true
- name: Recall Weighted
type: recall
value: 0.9135841584158416
verified: true
- name: F1 Macro
type: f1
value: 0.9138785016966742
verified: true
- name: F1 Micro
type: f1
value: 0.9135841584158415
verified: true
- name: F1 Weighted
type: f1
value: 0.9138785016966743
verified: true
- name: loss
type: loss
value: 0.30761435627937317
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-finetuned-food101
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- Accuracy: 0.9210
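A minimal usage sketch with the standard image-classification pipeline is shown below; the image path is a placeholder for any local food photo (or an image URL), not a file shipped with this model.
```python
from transformers import pipeline

# Minimal usage sketch; "my_food_photo.jpg" is a placeholder path
classifier = pipeline("image-classification", model="aspis/swin-finetuned-food101")
print(classifier("my_food_photo.jpg"))
```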
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5077 | 1.0 | 1183 | 0.3851 | 0.8893 |
| 0.3523 | 2.0 | 2366 | 0.3124 | 0.9088 |
| 0.1158 | 3.0 | 3549 | 0.2772 | 0.9210 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
biu-nlp/lingmess-coref | ea4e2faa2df18efbcdfeebd70865a72cbb5fee1e | 2022-06-29T11:48:40.000Z | [
"pytorch",
"longformer",
"en",
"arxiv:2205.12644",
"transformers",
"lingmess-coref-v1",
"license:mit"
]
| null | false | biu-nlp | null | biu-nlp/lingmess-coref | 13 | null | transformers | 10,327 | ---
language: en
tags: lingmess-coref-v1
license: mit
---
## LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution
[LingMess](https://arxiv.org/abs/2205.12644) is a linguistically motivated categorization of mention-pairs into 6 types of coreference decisions, with a dedicated trainable scoring function learned for each category. This significantly improves the accuracy of the pairwise scorer as well as the overall coreference performance on the English OntoNotes coreference corpus.
Please check the [official repository](https://github.com/shon-otmazgin/lingmess-coref) for more details and updates.
#### Training on OntoNotes
We present the test results on the OntoNotes 5.0 dataset.
| Model | Avg. F1 |
|---------------------------------|---------|
| SpanBERT-large + e2e | 79.6 |
| Longformer-large + s2e | 80.3 |
| **Longformer-large + LingMess** | 81.4 |
### Citation
If you find LingMess useful for your work, please cite the following paper:
``` latex
@misc{https://doi.org/10.48550/arxiv.2205.12644,
doi = {10.48550/ARXIV.2205.12644},
url = {https://arxiv.org/abs/2205.12644},
author = {Otmazgin, Shon and Cattan, Arie and Goldberg, Yoav},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
speechbrain/asr-wav2vec2-dvoice-darija | ed08fb00905c304d6bc57a8a495120c5e25eb3b9 | 2022-06-10T00:58:04.000Z | [
"wav2vec2",
"feature-extraction",
"dar",
"dataset:Dvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
]
| automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-dvoice-darija | 13 | null | speechbrain | 10,328 | ---
language: "dar"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- Dvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Darija (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on a [DVoice](https://zenodo.org/record/6342622) Darija dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 5.51 | 18.46 | 5.85 | 18.28 |
# Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and is trained with the train transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Darija dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Darija)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-dvoice-darija", savedir="pretrained_models/asr-wav2vec2-dvoice-darija")
asr_model.transcribe_file('speechbrain/asr-wav2vec2-dvoice-darija/example_darija.wav')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
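For example, reusing the call from the transcription example above (a minimal sketch):
```python
from speechbrain.pretrained import EncoderASR

# Same call as above, with run_opts placing the model on a CUDA device.
asr_model = EncoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-dvoice-darija",
    savedir="pretrained_models/asr-wav2vec2-dvoice-darija",
    run_opts={"device": "cuda"},
)
asr_model.transcribe_file('speechbrain/asr-wav2vec2-dvoice-darija/example_darija.wav')
```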
# Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/DVoice/ASR/CTC
python train_with_wav2vec2.py hparams/train_dar_with_wav2vec.yaml --data_folder=/localscratch/darija/
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1vNT7RjRuELs7pumBHmfYsrOp9m46D0ym?usp=sharing).
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}},
}
```
# About DVoice
DVoice is a community initiative that aims to provide African low-resource languages with data and models to facilitate their use of voice technologies. The lack of data on these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling recordings that are retrieved from social media. The DVoice platform currently manages 7 languages including Darija (Moroccan Arabic dialect), whose dataset appears in this release, as well as Wolof, Mandingo, Serere, Pular, Diola, and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of technologies together.
# About AIOX Labs
Based in Rabat, London, and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies.
- It supports the growth of groups, the optimization of processes, and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business-ready data products with a solid algorithmic base and adaptability for the specific needs of each client.
- A complementary team made up of PhDs in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems, and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The research areas of the laboratory are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}},
}
```
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution.
|
zuu/automatic-speech-recognition | c5bd5e6c2ea9c24a25ad90d8aa623313c28c2bf1 | 2022-06-11T09:41:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | zuu | null | zuu/automatic-speech-recognition | 13 | null | transformers | 10,329 | Entry not found |
carblacac/bert-finetuned-ner | 4a874f8e71cd529fffc8fb5fec424ae7fe7f47f6 | 2022-06-14T10:07:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | carblacac | null | carblacac/bert-finetuned-ner | 13 | null | transformers | 10,330 | Entry not found |
ghadeermobasher/BC5CDR-Chem-Modified-PubMedBERT-384 | 976124862f8e127089c25d1f99b182ad76481690 | 2022-06-15T12:09:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified-PubMedBERT-384 | 13 | null | transformers | 10,331 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Modified-PubMedBERT-384 | a30e30d8ba963e0abe2aea4979652ab00a258ab0 | 2022-06-14T05:57:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Modified-PubMedBERT-384 | 13 | null | transformers | 10,332 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Modified-BlueBERT-512 | 0dee08c36175ff1592b3924cca929b07551452f6 | 2022-06-14T09:35:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Modified-BlueBERT-512 | 13 | null | transformers | 10,333 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Original-PubMedBERT-384 | a30a8fe5990252f86a965e8b950da568903a59a0 | 2022-06-14T06:33:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Original-PubMedBERT-384 | 13 | null | transformers | 10,334 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Original-BioBERT-512 | 3c52d62c633fa135bf4a70836218b195ebf09c9e | 2022-06-14T10:03:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Original-BioBERT-512 | 13 | null | transformers | 10,335 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Modified-SciBERT-384 | 4904d7209de2de9e5672f94152828121a983f82c | 2022-06-14T18:54:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Modified-SciBERT-384 | 13 | null | transformers | 10,336 | Entry not found |
eslamxm/xlmroberta-finetuned-fa | c1f6097c73d428cd542c18d49cbb3fa6c0e9b2ad | 2022-06-15T06:53:15.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:pn_summary",
"transformers",
"summarization",
"fa",
"xlmroberta",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | eslamxm | null | eslamxm/xlmroberta-finetuned-fa | 13 | null | transformers | 10,337 | ---
tags:
- summarization
- fa
- xlmroberta
- Abstractive Summarization
- generated_from_trainer
datasets:
- pn_summary
model-index:
- name: xlmroberta-finetuned-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta-finetuned-fa
This model is a fine-tuned version of [](https://huggingface.co/) on the pn_summary dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2286
- Rouge-1: 4.99
- Rouge-2: 0.0
- Rouge-l: 4.99
- Gen Len: 20.0
- Bertscore: 51.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
mmeet611/finetuning-sentiment-model-3000-samples | 27ac771731fe1dcb304e84dad698ba0ef806298f | 2022-07-05T07:16:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | mmeet611 | null | mmeet611/finetuning-sentiment-model-3000-samples | 13 | null | transformers | 10,338 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8628762541806019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3052
- Accuracy: 0.8633
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
justpyschitry/Medical_Article_Classifier_by_ICD-11_Chapter | c350b711f6950912f759e70a5eebcd8f31f902cb | 2022-06-15T21:38:26.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:justpyschitry/autotrain-data-Psychiatry_Article_Identifier",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | justpyschitry | null | justpyschitry/Medical_Article_Classifier_by_ICD-11_Chapter | 13 | null | transformers | 10,339 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- justpyschitry/autotrain-data-Psychiatry_Article_Identifier
co2_eq_emissions: 0.021794705501614994
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 990132820
- CO2 Emissions (in grams): 0.021794705501614994
## Validation Metrics
- Loss: 0.3959168493747711
- Accuracy: 0.9141004862236629
- Macro F1: 0.8984327823035179
- Micro F1: 0.9141004862236629
- Weighted F1: 0.913962331636746
- Macro Precision: 0.9087151885944185
- Micro Precision: 0.9141004862236629
- Weighted Precision: 0.9154123644574501
- Macro Recall: 0.8957596627132517
- Micro Recall: 0.9141004862236629
- Weighted Recall: 0.9141004862236629
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/justpyschitry/autotrain-Psychiatry_Article_Identifier-990132820
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("justpyschitry/autotrain-Psychiatry_Article_Identifier-990132820", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("justpyschitry/autotrain-Psychiatry_Article_Identifier-990132820", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Akihiro2/akihiro2-finetuned-kde4-en-to-jp-accelerate | 014605485acc4e964d1ee8a8b2ae222ccdd38979 | 2022-06-17T08:24:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Akihiro2 | null | Akihiro2/akihiro2-finetuned-kde4-en-to-jp-accelerate | 13 | null | transformers | 10,340 | |
S2312dal/M1_cross | c0332f1e22740ae366f1dc615399d7337a8d72f3 | 2022-06-17T14:17:56.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | S2312dal | null | S2312dal/M1_cross | 13 | null | transformers | 10,341 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M1_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M1_cross
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0066
- Pearson: 0.9828
- Spearmanr: 0.9147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 125.0
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0294 | 1.0 | 131 | 0.0457 | 0.8770 | 0.8351 |
| 0.0237 | 2.0 | 262 | 0.0302 | 0.9335 | 0.8939 |
| 0.015 | 3.0 | 393 | 0.0155 | 0.9594 | 0.9054 |
| 0.0177 | 4.0 | 524 | 0.0106 | 0.9778 | 0.9091 |
| 0.0087 | 5.0 | 655 | 0.0066 | 0.9828 | 0.9147 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Nonzerophilip/bert-finetuned-ner_swedish_test_large_set | da1a65e5c3434959b6743e57eb1aec9b959895c0 | 2022-06-18T08:36:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:suc3",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Nonzerophilip | null | Nonzerophilip/bert-finetuned-ner_swedish_test_large_set | 13 | null | transformers | 10,342 | ---
tags:
- generated_from_trainer
datasets:
- suc3
model-index:
- name: bert-finetuned-ner_swedish_test_large_set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_test_large_set
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) on the suc3 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0265
- eval_precision: 0.8542
- eval_recall: 0.8468
- eval_f1: 0.8505
- eval_accuracy: 0.9919
- eval_runtime: 1076.8307
- eval_samples_per_second: 10.685
- eval_steps_per_second: 1.336
- epoch: 1.0
- step: 5754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
philschmid/habana-xlm-r-large-amazon-massive | 3d761073603c1d60a140b163fe3e01f237c4ddc7 | 2022-06-24T13:38:20.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:AmazonScience/massive",
"transformers",
"generated_from_trainer",
"habana",
"license:apache-2.0"
]
| text-classification | false | philschmid | null | philschmid/habana-xlm-r-large-amazon-massive | 13 | null | transformers | 10,343 | ---
license: apache-2.0
tags:
- generated_from_trainer
- habana
datasets:
- AmazonScience/massive
metrics:
- accuracy
- f1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# philschmid/habana-xlm-r-large-amazon-massive
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the AmazonScience/massive dataset.
It achieves the following results on the evaluation set:
## 8x HPU approx. 41min
**train results**
```bash
{'loss': 0.2651, 'learning_rate': 2.4e-05, 'epoch': 1.0}
{'loss': 0.1079, 'learning_rate': 1.8e-05, 'epoch': 2.0}
{'loss': 0.0563, 'learning_rate': 1.2e-05, 'epoch': 3.0}
{'loss': 0.0308, 'learning_rate': 6e-06, 'epoch': 4.0}
{'loss': 0.0165, 'learning_rate': 0.0, 'epoch': 5.0}
```
total
```bash
{'train_runtime': 3172.4502, 'train_samples_per_second': 127.028, 'train_steps_per_second': 1.986, 'train_loss': 0.09531746031746031, 'epoch': 5.0}
```
**eval results**
```bash
{'eval_loss': 0.3128528892993927, 'eval_accuracy': 0.9125852013210597, 'eval_f1': 0.9125852013210597, 'eval_runtime': 45.1795, 'eval_samples_per_second': 314.988, 'eval_steps_per_second': 4.936, 'epoch': 1.0}
{'eval_loss': 0.36222779750823975, 'eval_accuracy': 0.9134987000210807, 'eval_f1': 0.9134987000210807, 'eval_runtime': 29.8241, 'eval_samples_per_second': 477.165, 'eval_steps_per_second': 7.477, 'epoch': 2.0}
{'eval_loss': 0.3943144679069519, 'eval_accuracy': 0.9140608530672476, 'eval_f1': 0.9140608530672476, 'eval_runtime': 30.1085, 'eval_samples_per_second': 472.657, 'eval_steps_per_second': 7.407, 'epoch': 3.0}
{'eval_loss': 0.40938863158226013, 'eval_accuracy': 0.9158878504672897, 'eval_f1': 0.9158878504672897, 'eval_runtime': 30.4546, 'eval_samples_per_second': 467.286, 'eval_steps_per_second': 7.322, 'epoch': 4.0}
{'eval_loss': 0.4137658476829529, 'eval_accuracy': 0.9172932330827067, 'eval_f1': 0.9172932330827067, 'eval_runtime': 30.3464, 'eval_samples_per_second': 468.952, 'eval_steps_per_second': 7.348, 'epoch': 5.0}
```
# Environment
The training was run on a `DL1` instance on AWS using Habana Gaudi1 and `optimum`.
See this repository for more information: https://github.com/philschmid/deep-learning-habana-huggingface
|
KoichiYasuoka/bert-base-japanese-wikipedia-ud-head | 0da626d8f4bdd4a90aa033598caf1337644dbb1c | 2022-07-20T03:51:44.000Z | [
"pytorch",
"bert",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| question-answering | false | KoichiYasuoka | null | KoichiYasuoka/bert-base-japanese-wikipedia-ud-head | 13 | null | transformers | 10,344 | ---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# bert-base-japanese-wikipedia-ud-head
## Model Description
This is a BERT model pretrained on Japanese Wikipedia texts for dependency parsing (head detection on long-unit words) posed as question answering, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
|
Sayan01/tiny-bert-cola-distilled | 8a459f9c77912b2319441cdeb802d5b5d9d3b7a5 | 2022-07-14T07:33:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sayan01 | null | Sayan01/tiny-bert-cola-distilled | 13 | null | transformers | 10,345 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-40 | b0527926225e9a8d00e06f54c878dd4582f8ca9e | 2022-06-21T13:28:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-40 | 13 | null | transformers | 10,346 | Entry not found |
davidcechak/DNADeberta_finedemo_coding_vs_intergenomic_seqs | 258eba22d3132cda046b2a67c4350cdaa2ee1a6c | 2022-06-22T08:19:10.000Z | [
"pytorch",
"deberta",
"text-classification",
"transformers"
]
| text-classification | false | davidcechak | null | davidcechak/DNADeberta_finedemo_coding_vs_intergenomic_seqs | 13 | null | transformers | 10,347 | Entry not found |
Zamachi/albert-for-multilabel-sentence-classification | f47d0ebb3c59a2128f3427d698e2ad34bbfb2c7e | 2022-07-14T13:49:59.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | Zamachi | null | Zamachi/albert-for-multilabel-sentence-classification | 13 | null | transformers | 10,348 | Entry not found |
Sayan01/tiny-bert-qnli-distilled | 3d5b3dcc80b599a6dcde60d4193937c1a61a6b8d | 2022-07-15T17:47:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sayan01 | null | Sayan01/tiny-bert-qnli-distilled | 13 | null | transformers | 10,349 | Entry not found |
Hermite/DialoGPT-large-hermite3 | 65dbba4ec74581c8f9f797e144ef952e77cd8a85 | 2022-06-23T15:55:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Hermite | null | Hermite/DialoGPT-large-hermite3 | 13 | null | transformers | 10,350 | ---
tags:
- conversational
---
# Hermite DialoGPT Model |
AlekseyKorshuk/books-long-model | 43a70085a37c8886fa1d4baae679efaa97372d9e | 2022-06-24T10:27:51.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
]
| text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/books-long-model | 13 | 1 | transformers | 10,351 | Entry not found |
domenicrosati/deberta-v3-large-finetuned-DAGPap22 | 9299f200908c80e9b33ba1029bcfd26b2364b05b | 2022-06-25T15:42:46.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-DAGPap22 | 13 | null | transformers | 10,352 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
model-index:
- name: deberta-v3-large-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-DAGPap22
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
crystina-z/canine-c-mmarco-all.epoch-2 | 1936e693625b554c9bfbe551e0ee5b3cd4bda6e3 | 2022-06-25T18:06:07.000Z | [
"pytorch",
"canine",
"feature-extraction",
"transformers"
]
| feature-extraction | false | crystina-z | null | crystina-z/canine-c-mmarco-all.epoch-2 | 13 | null | transformers | 10,353 | Entry not found |
canlinzhang/bert-finetuned-ner | 4dd7719f294d2d96028e12c594878ad2d2036ec3 | 2022-06-26T04:43:18.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | canlinzhang | null | canlinzhang/bert-finetuned-ner | 13 | null | transformers | 10,354 | Entry not found |
OptimalHoiboy/DialoGPT-small-kasumai | b905b45aebf5c56b70d129be59508ebcdb556769 | 2022-06-27T18:06:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | OptimalHoiboy | null | OptimalHoiboy/DialoGPT-small-kasumai | 13 | null | transformers | 10,355 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
nvidia/stt_de_conformer_transducer_large | 8ba02a07bb4d2ce404bc0f299c42f711bec4f340 | 2022-07-27T17:58:21.000Z | [
"nemo",
"de",
"dataset:VoxPopuli (DE)",
"dataset:multilingual_librispeech",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2005.08100",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"license:cc-by-4.0",
"model-index"
]
| automatic-speech-recognition | false | nvidia | null | nvidia/stt_de_conformer_transducer_large | 13 | 2 | nemo | 10,356 | ---
language:
- de
library_name: nemo
datasets:
- VoxPopuli (DE)
- multilingual_librispeech
- mozilla-foundation/common_voice_7_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: stt_de_conformer_transducer_large
results:
- task:
type: Automatic Speech Recognition
name: speech-recognition
dataset:
name: common-voice-7-0
type: mozilla-foundation/common_voice_7_0
config: de
split: test
args:
language: de
metrics:
- name: Test WER
type: wer
value: 4.93
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
config: german
split: test
args:
language: de
metrics:
- name: Test WER
type: wer
value: 3.85
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Vox Populi
type: polinaeterna/voxpopuli
args:
language: de
metrics:
- name: Test WER
type: wer
value: 5.70
---
# NVIDIA Conformer-Transducer Large (de)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model transcribes speech into lower-case German letters and spaces.
It is a "large" version of the Conformer-Transducer model (around 120M parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_de_conformer_transducer_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_de_conformer_transducer_large" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16 kHz (16000 Hz) mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
The Conformer-Transducer model is an autoregressive variant of the Conformer model [1] for Automatic Speech Recognition, which uses Transducer loss/decoding instead of CTC loss. You may find more details on this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of German speech:
- VoxPopuli (DE) 200 hrs subset
- Multilingual Librispeech (MLS DE) - 1500 hrs subset
- Mozilla Common Voice (v7.0)
Note: older versions of the model may have been trained on a smaller set of datasets.
## Performance
The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | MCV7.0 dev | MCV7.0 test | MLS dev | MLS test | Voxpopuli dev | Voxpopuli test |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|------------|----------------|
| 1.6.0 | SentencePiece Unigram | 1024 | 4.40 | 4.93 | 3.22 | 3.85 | 11.04 | 8.85 |
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
Vanmas/bert-finetuned-ner | 175d9969490207934e3d9fea1d0701efff74bd7c | 2022-06-28T08:11:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Vanmas | null | Vanmas/bert-finetuned-ner | 13 | null | transformers | 10,357 | Entry not found |
pserna/bert2bert-spanish-paraphraser | 064965d0ae5cb17a68d1c62b6a0c925f05403c88 | 2022-07-04T15:15:38.000Z | [
"pytorch",
"tf",
"encoder-decoder",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | pserna | null | pserna/bert2bert-spanish-paraphraser | 13 | null | transformers | 10,358 | ---
license: apache-2.0
---
# Spanish Bert2Bert fine-tuned on Quora question pairs dataset
Fine-tuning of a [question generator model](https://huggingface.co/mrm8488/bert2bert-spanish-question-generation) into a paraphraser model, using a poor-man's translation of the Quora question pairs dataset. It basically rephrases questions into similar questions; non-interrogative sentences are not handled very well. A minimal usage sketch is given after the notes below.
- Original models: [mrm8488/bert2bert-spanish-question-generation](https://huggingface.co/mrm8488/bert2bert-spanish-question-generation?text=Manuel+vive+en+Murcia%2C+Espa%C3%B1a), which is based on [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (?).
- Custom database: "Poor-man's" translation of duplicated questions in Quora (translated with [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es))
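A minimal usage sketch (assumed, not part of the original card): the checkpoint is an encoder-decoder (bert2bert) model, so it should be loadable with `EncoderDecoderModel`; the example question and generation settings below are illustrative only.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("pserna/bert2bert-spanish-paraphraser")
model = EncoderDecoderModel.from_pretrained("pserna/bert2bert-spanish-paraphraser")

# Hypothetical input question; the model should rephrase it into a similar question.
question = "¿Cómo puedo aprender a programar en Python?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=32,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```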
|
czearing/story-to-title | db7460f0c49d8dc46fcde87dba3f73fde5216150 | 2022-06-28T22:43:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | czearing | null | czearing/story-to-title | 13 | 1 | transformers | 10,359 | ---
license: mit
---
## Story to Title
The model is based on the T5 language model and trained using a large collection of movie descriptions and corresponding titles. When given a story, it generates a corresponding title.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("czearing/story-to-title")
model = AutoModel.from_pretrained("czearing/czearing/story-to-title")
```
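To actually generate a title, the T5 checkpoint needs its seq2seq head; a minimal generation sketch (assumed usage, not from the original card — the story text and generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("czearing/story-to-title")
model = AutoModelForSeq2SeqLM.from_pretrained("czearing/story-to-title")

# Hypothetical story description; the model should return a short title.
story = "A retired detective is pulled back into one last case that forces him to confront his past."
inputs = tokenizer(story, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```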
## License
MIT
|
Salvatore/bert-finetuned-mutation-recognition-2 | 564465100183c5dc75ee4534b73b24c9c8ac96cd | 2022-06-29T14:29:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Salvatore | null | Salvatore/bert-finetuned-mutation-recognition-2 | 13 | null | transformers | 10,360 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-mutation-recognition-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mutation-recognition-2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0818
- Dnamutation F1: 0.6371
- Snp F1: 0.0952
- Proteinmutation F1: 0.8412
- Precision: 0.7646
- Recall: 0.6596
- F1: 0.7082
- Accuracy: 0.9877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Dnamutation F1 | Snp F1 | Proteinmutation F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:------:|:------------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 403 | 0.0383 | 0.5871 | 0.0 | 0.7573 | 0.6195 | 0.6770 | 0.6470 | 0.9872 |
| 0.0863 | 2.0 | 806 | 0.0349 | 0.6202 | 0.0 | 0.8646 | 0.6815 | 0.7408 | 0.7099 | 0.9889 |
| 0.0295 | 3.0 | 1209 | 0.0415 | 0.5670 | 0.0 | 0.7689 | 0.6887 | 0.6035 | 0.6433 | 0.9866 |
| 0.019 | 4.0 | 1612 | 0.0430 | 0.5909 | 0.4742 | 0.7840 | 0.6667 | 0.6615 | 0.6641 | 0.9881 |
| 0.0127 | 5.0 | 2015 | 0.0507 | 0.6345 | 0.0 | 0.8455 | 0.7290 | 0.6867 | 0.7072 | 0.9885 |
| 0.0127 | 6.0 | 2418 | 0.0678 | 0.5946 | 0.05 | 0.8087 | 0.7471 | 0.6170 | 0.6758 | 0.9868 |
| 0.0067 | 7.0 | 2821 | 0.0544 | 0.6693 | 0.2727 | 0.8475 | 0.7208 | 0.7292 | 0.725 | 0.9884 |
| 0.0042 | 8.0 | 3224 | 0.0642 | 0.6694 | 0.2000 | 0.8401 | 0.7390 | 0.7118 | 0.7251 | 0.9885 |
| 0.0019 | 9.0 | 3627 | 0.0847 | 0.6271 | 0.0976 | 0.8416 | 0.7671 | 0.6499 | 0.7037 | 0.9877 |
| 0.0014 | 10.0 | 4030 | 0.0818 | 0.6371 | 0.0952 | 0.8412 | 0.7646 | 0.6596 | 0.7082 | 0.9877 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.12.1
|
sarahmiller137/bioclinical-bert-ft-m3-lc | b7d12474813b5215463a104ed899e34beb121010 | 2022-07-05T16:26:56.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:MIMIC-III ",
"transformers",
"text classification",
"license:cc"
]
| text-classification | false | sarahmiller137 | null | sarahmiller137/bioclinical-bert-ft-m3-lc | 13 | null | transformers | 10,361 | ---
language:
- en
thumbnail: "url to a thumbnail used in social sharing"
tags:
- 'text classification'
license: cc
datasets:
- MIMIC-III
---
## Model information:
This model is the [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) model finetuned on radiology report texts from the MIMIC-III database. The task was text classification, used to benchmark this model against a selection of other BERT variants on classifying MIMIC-III radiology report texts into two classes: reports linked to an ICD9 diagnosis code for lung cancer were labelled 1, and a random sample of reports not linked to any cancer diagnosis code at all were labelled 0.
## Intended uses:
This model is intended to be used to classify texts to identify the presence of lung cancer. The model predicts labels of [0,1].
## Limitations:
Note that the dataset and model may not be fully representative of, or suitable for, all needs; it is recommended that the dataset paper and the base model card be reviewed before use -
- [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf)
- [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT)
## How to use:
Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc")
model = AutoModel.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc")
```
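Since the intended use is binary text classification, the checkpoint can presumably also be loaded with a sequence-classification head; a minimal sketch (assumed usage, not from the original card — the input text is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "sarahmiller137/bioclinical-bert-ft-m3-lc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Label 1 = report linked to a lung-cancer ICD9 code, 0 = no cancer code (see above).
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Hypothetical radiology report text goes here."))
```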
|
Jeevesh8/goog_bert_ft_cola-49 | 33393de4a88da2fb43020f9c81c6dbba538530f1 | 2022-06-29T17:34:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-49 | 13 | null | transformers | 10,362 | Entry not found |
bayartsogt/roberta-base-ner | f4b56cf78a7c93b6a92936d3ee5d1866453016b1 | 2022-07-01T01:51:15.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"mn",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | bayartsogt | null | bayartsogt/roberta-base-ner | 13 | null | transformers | 10,363 | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1328
- Precision: 0.9248
- Recall: 0.9325
- F1: 0.9286
- Accuracy: 0.9805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.17 | 1.0 | 477 | 0.0823 | 0.8652 | 0.9001 | 0.8823 | 0.9739 |
| 0.0567 | 2.0 | 954 | 0.0883 | 0.9070 | 0.9296 | 0.9182 | 0.9778 |
| 0.0278 | 3.0 | 1431 | 0.0904 | 0.9165 | 0.9302 | 0.9233 | 0.9789 |
| 0.0158 | 4.0 | 1908 | 0.0945 | 0.9220 | 0.9301 | 0.9260 | 0.9798 |
| 0.0089 | 5.0 | 2385 | 0.1118 | 0.9227 | 0.9287 | 0.9257 | 0.9799 |
| 0.0061 | 6.0 | 2862 | 0.1154 | 0.9212 | 0.9309 | 0.9260 | 0.9803 |
| 0.0037 | 7.0 | 3339 | 0.1240 | 0.9253 | 0.9320 | 0.9286 | 0.9806 |
| 0.0023 | 8.0 | 3816 | 0.1293 | 0.9232 | 0.9316 | 0.9274 | 0.9803 |
| 0.0013 | 9.0 | 4293 | 0.1323 | 0.9253 | 0.9332 | 0.9292 | 0.9806 |
| 0.0012 | 10.0 | 4770 | 0.1328 | 0.9248 | 0.9325 | 0.9286 | 0.9805 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mousaazari/t5-text2sql | edde028d1cc3769f80c9370f0f13c82e604d7022 | 2022-07-22T14:19:15.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mousaazari | null | mousaazari/t5-text2sql | 13 | null | transformers | 10,364 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-text2sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-text2sql
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1528
- Rouge2 Precision: 0.9252
- Rouge2 Recall: 0.4354
- Rouge2 Fmeasure: 0.5687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| No log | 1.0 | 11 | 2.7311 | 0.0907 | 0.0278 | 0.0409 |
| No log | 2.0 | 22 | 1.9749 | 0.0948 | 0.0281 | 0.0417 |
| No log | 3.0 | 33 | 1.4801 | 0.0998 | 0.0281 | 0.0428 |
| No log | 4.0 | 44 | 1.0439 | 0.0928 | 0.0266 | 0.0405 |
| No log | 5.0 | 55 | 0.7436 | 0.2758 | 0.1199 | 0.1633 |
| No log | 6.0 | 66 | 0.5619 | 0.6723 | 0.3182 | 0.4184 |
| No log | 7.0 | 77 | 0.4470 | 0.6655 | 0.3125 | 0.4093 |
| No log | 8.0 | 88 | 0.3851 | 0.762 | 0.3384 | 0.4508 |
| No log | 9.0 | 99 | 0.3372 | 0.7611 | 0.33 | 0.443 |
| No log | 10.0 | 110 | 0.3113 | 0.7754 | 0.3396 | 0.454 |
| No log | 11.0 | 121 | 0.2832 | 0.7977 | 0.3486 | 0.4682 |
| No log | 12.0 | 132 | 0.2703 | 0.8346 | 0.3786 | 0.5019 |
| No log | 13.0 | 143 | 0.2519 | 0.8379 | 0.3849 | 0.5058 |
| No log | 14.0 | 154 | 0.2411 | 0.856 | 0.3883 | 0.5116 |
| No log | 15.0 | 165 | 0.2274 | 0.8701 | 0.4023 | 0.5275 |
| No log | 16.0 | 176 | 0.2117 | 0.8773 | 0.4049 | 0.5312 |
| No log | 17.0 | 187 | 0.2061 | 0.8841 | 0.4015 | 0.5296 |
| No log | 18.0 | 198 | 0.1957 | 0.8894 | 0.4059 | 0.5349 |
| No log | 19.0 | 209 | 0.1859 | 0.9125 | 0.4274 | 0.5584 |
| No log | 20.0 | 220 | 0.1866 | 0.8914 | 0.4097 | 0.5385 |
| No log | 21.0 | 231 | 0.1846 | 0.8957 | 0.4128 | 0.5423 |
| No log | 22.0 | 242 | 0.1797 | 0.9252 | 0.4354 | 0.5687 |
| No log | 23.0 | 253 | 0.1730 | 0.9252 | 0.4354 | 0.5687 |
| No log | 24.0 | 264 | 0.1645 | 0.9252 | 0.4354 | 0.5687 |
| No log | 25.0 | 275 | 0.1612 | 0.9252 | 0.4354 | 0.5687 |
| No log | 26.0 | 286 | 0.1599 | 0.9252 | 0.4354 | 0.5687 |
| No log | 27.0 | 297 | 0.1570 | 0.9252 | 0.4354 | 0.5687 |
| No log | 28.0 | 308 | 0.1550 | 0.9252 | 0.4354 | 0.5687 |
| No log | 29.0 | 319 | 0.1544 | 0.9252 | 0.4354 | 0.5687 |
| No log | 30.0 | 330 | 0.1534 | 0.9252 | 0.4354 | 0.5687 |
| No log | 31.0 | 341 | 0.1529 | 0.9252 | 0.4354 | 0.5687 |
| No log | 32.0 | 352 | 0.1528 | 0.9252 | 0.4354 | 0.5687 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
8X7K/anime-sentiment-analysis | 0a948e7ca1a3b20c802c9c5a4283c9ab1774556c | 2022-07-04T06:29:50.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | 8X7K | null | 8X7K/anime-sentiment-analysis | 13 | null | transformers | 10,365 | Entry not found |
samuelrince/bert-base-cased-finetuned-panx-en | eb4c79f53c89ea8db066f8f2de1f3ec80ebf443a | 2022-07-04T20:08:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | samuelrince | null | samuelrince/bert-base-cased-finetuned-panx-en | 13 | null | transformers | 10,366 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: bert-base-cased-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-panx-en
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2941 | 1.0 | 1250 | 0.2432 |
| 0.186 | 2.0 | 2500 | 0.2214 |
| 0.1387 | 3.0 | 3750 | 0.2478 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dee4hf/autotrain-deephate2-1093539673 | 5f48e340923213e6c8893056ecc6b7cea20c7554 | 2022-07-06T04:28:59.000Z | [
"pytorch",
"albert",
"text-classification",
"bn",
"dataset:dee4hf/autotrain-data-deephate2",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | dee4hf | null | dee4hf/autotrain-deephate2-1093539673 | 13 | null | transformers | 10,367 | ---
tags: autotrain
language: bn
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dee4hf/autotrain-data-deephate2
co2_eq_emissions: 7.663051290039914
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1093539673
- CO2 Emissions (in grams): 7.663051290039914
## Validation Metrics
- Loss: 0.34404119849205017
- Accuracy: 0.8843120070113936
- Macro F1: 0.8771237753798016
- Micro F1: 0.8843120070113936
- Weighted F1: 0.8843498914288083
- Macro Precision: 0.8745249813256932
- Micro Precision: 0.8843120070113936
- Weighted Precision: 0.8854719661321065
- Macro Recall: 0.8812563739901838
- Micro Recall: 0.8843120070113936
- Weighted Recall: 0.8843120070113936
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dee4hf/autotrain-deephate2-1093539673
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dee4hf/autotrain-deephate2-1093539673", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dee4hf/autotrain-deephate2-1093539673", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
pollner/finetuning-sentiment-model-3000-samples | cd8f74b20e55d26f25cca9cb1b6d7c789351f252 | 2022-07-06T07:56:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pollner | null | pollner/finetuning-sentiment-model-3000-samples | 13 | null | transformers | 10,368 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3183
- Accuracy: 0.8767
- F1: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
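Until more documentation is added, a minimal usage sketch (for illustration only; the review text is invented and the label names depend on the saved config):
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="pollner/finetuning-sentiment-model-3000-samples",
)
# Returns a list of {label, score} dicts; label names (e.g. LABEL_0 / LABEL_1) come from the checkpoint config.
print(classifier("A surprisingly touching film with a great cast."))
```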
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
chiendvhust/roberta-base-finetuned-squad | bab255ced06152a40a3d31917c2c45b9e64a06b3 | 2022-07-06T12:24:17.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | chiendvhust | null | chiendvhust/roberta-base-finetuned-squad | 13 | null | transformers | 10,369 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
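In the meantime, a minimal extractive question-answering sketch (added for illustration; the question and context are made up):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="chiendvhust/roberta-base-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-base on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```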
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Mascariddu8/bert-finetuned-ner | 68a9982c91d021d7042847025d3403413ee09c24 | 2022-07-07T14:36:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Mascariddu8 | null | Mascariddu8/bert-finetuned-ner | 13 | null | transformers | 10,370 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9357296670531721
- name: Recall
type: recall
value: 0.9506900033658701
- name: F1
type: f1
value: 0.9431505133984472
- name: Accuracy
type: accuracy
value: 0.9857390946017542
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Precision: 0.9357
- Recall: 0.9507
- F1: 0.9432
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
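Pending fuller documentation, a minimal sketch of running the checkpoint directly, without the pipeline helper (added for illustration; the sentence is invented and the printed tags assume the CoNLL-2003 label set stored in the config):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "Mascariddu8/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-word token to its predicted tag (e.g. B-ORG, I-LOC, O).
for token_id, pred in zip(inputs["input_ids"][0], logits.argmax(dim=-1)[0]):
    print(tokenizer.convert_ids_to_tokens(token_id.item()), model.config.id2label[pred.item()])
```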
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0847 | 1.0 | 1756 | 0.0636 | 0.9150 | 0.9387 | 0.9267 | 0.9840 |
| 0.0399 | 2.0 | 3512 | 0.0592 | 0.9302 | 0.9485 | 0.9393 | 0.9854 |
| 0.0201 | 3.0 | 5268 | 0.0639 | 0.9357 | 0.9507 | 0.9432 | 0.9857 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rosicast/wav2vec2-large-xlsr-korean-zeroth | 7f11ae09806098f5951cb155db303b8b03b47d2b | 2022-07-08T08:06:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"korean",
"dataset:kresnik/zeroth_korean",
"transformers",
"Automatic Speech Recognition",
"wav2vec2-large-xlsr",
"speech",
"license:apache-2.0"
]
| automatic-speech-recognition | false | rosicast | null | rosicast/wav2vec2-large-xlsr-korean-zeroth | 13 | null | transformers | 10,371 | ---
license:
- apache-2.0
language:
- korean
tags:
- korean
- Automatic Speech Recognition
- automatic-speech-recognition
- wav2vec2
- wav2vec2-large-xlsr
- speech
datasets:
- kresnik/zeroth_korean
---
# Alert: this model is still being trained
# Model description
Check [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
# Intended uses & limitations
Automatic Speech Recognition
The model was trained only on the Zeroth-Korean corpus, which provides 51.6 hours of transcribed Korean audio for training (22,263 utterances, 105 speakers, 3,000 sentences) and 1.2 hours of transcribed Korean audio for testing (457 utterances, 10 speakers). See [the dataset page](https://www.openslr.org/40/) for details about the data.
# How to use
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import soundfile as sf
import torch
from jiwer import wer
processor = Wav2Vec2Processor.from_pretrained("rosicast/wav2vec2-large-xlsr-korean-zeroth")
model = Wav2Vec2ForCTC.from_pretrained("rosicast/wav2vec2-large-xlsr-korean-zeroth").to('cuda')
ds = load_dataset("kresnik/zeroth_korean", "clean")
test_ds = ds['test']
# Transcribe one test utterance (assumes the dataset exposes 'audio' and 'text' columns).
sample = test_ds[0]
inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values.to('cuda')).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(transcription)
print("WER:", wer(sample["text"], transcription))
```
# Limitations and bias
# Evaluation results
Will be updated once training is finished. |
josh-oo/bert-to-gpt2-german-to-easy-german-wiki | 05cc57552b964d837ac88d479c16158d1b209b3e | 2022-07-07T20:47:24.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | josh-oo | null | josh-oo/bert-to-gpt2-german-to-easy-german-wiki | 13 | null | transformers | 10,372 | Entry not found |
casasdorjunior/t5-small-finetuned-xlsum | 0a21383954fe2f093f4cf0ed1f190cbc2af9fc6b | 2022-07-10T08:50:55.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | casasdorjunior | null | casasdorjunior/t5-small-finetuned-xlsum | 13 | null | transformers | 10,373 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xlsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: spanish
metrics:
- name: Rouge1
type: rouge
value: 15.4289
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xlsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6974
- Rouge1: 15.4289
- Rouge2: 3.146
- Rougel: 12.7682
- Rougelsum: 12.912
- Gen Len: 18.9889
## Model description
More information needed
## Intended uses & limitations
More information needed
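In the meantime, a minimal Spanish summarization sketch (added for illustration; the `"summarize: "` prefix and the example text are assumptions, since the card does not document the expected prompt format):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "casasdorjunior/t5-small-finetuned-xlsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "El gobierno anunció hoy un nuevo paquete de medidas económicas para frenar la inflación..."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```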
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9764 | 1.0 | 2382 | 2.6974 | 15.4289 | 3.146 | 12.7682 | 12.912 | 18.9889 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
danielreales00/fine-tuned-ai-ss-hs-01 | 3ed42a17757d772d0b3c46dc4a0244d71d0356ce | 2022-07-11T02:50:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | danielreales00 | null | danielreales00/fine-tuned-ai-ss-hs-01 | 13 | null | transformers | 10,374 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: fine-tuned-ai-ss-hs-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-ai-ss-hs-01
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- AUC: 0.88609
- Precision: 0.8514
- Accuracy: 0.8101
- F1: 0.7875
- Recall: 0.7326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.1207606211860595e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 357 | 1.0285 | 0.6955 | 0.5657 | 0.8987 | 0.4128 |
| 0.5857 | 2.0 | 714 | 1.0350 | 0.7207 | 0.6296 | 0.8673 | 0.4942 |
| 0.51 | 3.0 | 1071 | 0.7467 | 0.8156 | 0.7975 | 0.8442 | 0.7558 |
| 0.51 | 4.0 | 1428 | 0.8376 | 0.8101 | 0.7875 | 0.8514 | 0.7326 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
MiguelCosta/finetuning-sentiment-model-24000-samples | b815ab52ba004d7766fb43087310e995756e732c | 2022-07-12T10:48:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | MiguelCosta | null | MiguelCosta/finetuning-sentiment-model-24000-samples | 13 | null | transformers | 10,375 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-24000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
- name: F1
type: f1
value: 0.9273927392739274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-24000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3505
- Accuracy: 0.9267
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Khoa/t5-small-finetuned-xsum | 306ad07a7570e6e35b716a4fbd4cb9b738e3efa7 | 2022-07-12T17:05:24.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Khoa | null | Khoa/t5-small-finetuned-xsum | 13 | null | transformers | 10,376 | Entry not found |
Team-PIXEL/pixel-base-finetuned-pos-ud-vietnamese-vtb | 4e9f2bad3dec8bf076f2bc3f6521e9054cd1bab4 | 2022-07-13T01:34:08.000Z | [
"pytorch",
"pixel",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-pos-ud-vietnamese-vtb | 13 | null | transformers | 10,377 | Entry not found |
xliu128/distilbert-base-uncased-finetuned-clinc | e067720b347954c0bcbdb0b0a86156291e84e6c1 | 2022-07-13T02:30:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | xliu128 | null | xliu128/distilbert-base-uncased-finetuned-clinc | 13 | null | transformers | 10,378 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
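As a starting point, a minimal intent-classification sketch (added for illustration; the utterance is invented and the returned label names come from the checkpoint config):
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="xliu128/distilbert-base-uncased-finetuned-clinc",
)
# clinc_oos ("plus" config) covers 150 in-scope intents plus an out-of-scope class.
print(intent_classifier("Please transfer 100 dollars to my savings account"))
```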
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2891 | 0.7429 |
| 3.7868 | 2.0 | 636 | 1.8755 | 0.8374 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6928 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
morenolq/thext-bio-scibert | 77822575f8ff6d7fc5a2f6793d290b5dc775bcba | 2022-07-13T17:00:40.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"regression"
]
| text-classification | false | morenolq | null | morenolq/thext-bio-scibert | 13 | null | transformers | 10,379 | ---
language: "en"
tags:
- bert
- regression
- pytorch
pipeline:
- text-classification
widget:
- text: "We propose a new approach, based on Transformer-based encoding, to highlight extraction. To the best of our knowledge, this is the first attempt to use transformer architectures to address automatic highlight generation. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "We design a context-aware sentence-level regressor, in which the semantic similarity between candidate sentences and highlights is estimated by also attending the contextual knowledge provided by the other paper sections. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "Fig. 2, Fig. 3, Fig. 4 show the effect of varying the number K of selected highlights on the extraction performance. As expected, recall values increase while increasing the number of selected highlights, whereas precision values show an opposite trend. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
---
# General Information
This model is trained on journal publications belonging to the domain: **Biology and Medicine**.
This is an `allenai/scibert_scivocab_cased` model trained in the scientific domain. The model is trained with a regression objective to estimate the relevance of a sentence given the provided context (e.g., the abstract of the scientific paper).
The model is used in the paper 'Transformer-based highlights extraction from scientific papers' published in Knowledge-Based Systems scientific journal.
The model is able to achieve state-of-the-art performance in the task of highlights extraction from scientific papers.
Access to the full paper: [here](https://doi.org/10.1016/j.knosys.2022.109382).
# Usage:
For detailed usage please use the official repository https://github.com/MorenoLaQuatra/THExt .
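For a rough idea of what the checkpoint computes, here is a plain-Transformers sketch (this is not the official THExt API; it assumes the `sentence [SEP] abstract` input format shown in the widget examples above and a single regression output):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "morenolq/thext-bio-scibert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "We propose a new approach, based on Transformer-based encoding, to highlight extraction."
abstract = "Highlights are short sentences used to annotate scientific papers..."

# Candidate sentence and its context are paired as "sentence [SEP] abstract", as in the widget examples.
inputs = tokenizer(sentence + " [SEP] " + abstract, return_tensors="pt", truncation=True)
with torch.no_grad():
    relevance = model(**inputs).logits.squeeze().item()  # single regression score
print(relevance)
```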
# References:
If you find it useful, please cite the following paper:
```bibtex
@article{thext,
title={Transformer-based highlights extraction from scientific papers},
author={La Quatra, Moreno and Cagliero, Luca},
journal={Knowledge-Based Systems},
pages={109382},
year={2022},
publisher={Elsevier}
}
``` |
jordyvl/biobert-base-cased-v1.2_ncbi_disease-softmax-labelall-ner | 21391c5827c5d90053db6d948beea6a602151da2 | 2022-07-13T09:05:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:ncbi_disease",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | jordyvl | null | jordyvl/biobert-base-cased-v1.2_ncbi_disease-softmax-labelall-ner | 13 | null | transformers | 10,380 | ---
tags:
- generated_from_trainer
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2_ncbi_disease-softmax-labelall-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ncbi_disease
type: ncbi_disease
args: ncbi_disease
metrics:
- name: Precision
type: precision
value: 0.8288508557457213
- name: Recall
type: recall
value: 0.8614993646759848
- name: F1
type: f1
value: 0.8448598130841122
- name: Accuracy
type: accuracy
value: 0.9861487755016897
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2_ncbi_disease-softmax-labelall-ner
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0629
- Precision: 0.8289
- Recall: 0.8615
- F1: 0.8449
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0554 | 1.0 | 1359 | 0.0659 | 0.7814 | 0.8132 | 0.7970 | 0.9825 |
| 0.0297 | 2.0 | 2718 | 0.0445 | 0.8284 | 0.8895 | 0.8578 | 0.9876 |
| 0.0075 | 3.0 | 4077 | 0.0629 | 0.8289 | 0.8615 | 0.8449 | 0.9861 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Yvanzhu/E2E-NLG-Bart-best | 804565271f7c8008746d070ce1dd0181295653b8 | 2022-07-13T11:08:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Yvanzhu | null | Yvanzhu/E2E-NLG-Bart-best | 13 | null | transformers | 10,381 | Entry not found |
roscazo/BNE-conv-v1 | 4a9fea91227c1996a64a2d81e103adddbbddd07e | 2022-07-18T10:57:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | roscazo | null | roscazo/BNE-conv-v1 | 13 | null | transformers | 10,382 | Entry not found |
Bistolero/mt5_32b_DP_3 | 0b40ff9b1efc994add7c927d59bd3c26684e65a6 | 2022-07-13T17:05:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Bistolero | null | Bistolero/mt5_32b_DP_3 | 13 | null | transformers | 10,383 | Entry not found |
domenicrosati/SPECTER-finetuned-DAGPap22 | 44278b7ebdf40d98507dc29112c15d854170be9a | 2022-07-13T18:53:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | domenicrosati | null | domenicrosati/SPECTER-finetuned-DAGPap22 | 13 | null | transformers | 10,384 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SPECTER-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-finetuned-DAGPap22
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Accuracy: 0.9993
- F1: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.3422 | 1.0 | 669 | 0.4135 | 0.8914 | 0.9140 |
| 0.1074 | 2.0 | 1338 | 0.1216 | 0.9746 | 0.9811 |
| 0.0329 | 3.0 | 2007 | 0.0064 | 0.9989 | 0.9992 |
| 0.0097 | 4.0 | 2676 | 0.0132 | 0.9972 | 0.9980 |
| 0.0123 | 5.0 | 3345 | 0.0231 | 0.9961 | 0.9971 |
| 0.0114 | 6.0 | 4014 | 0.0080 | 0.9985 | 0.9989 |
| 0.0029 | 7.0 | 4683 | 0.2207 | 0.9727 | 0.9797 |
| 0.0075 | 8.0 | 5352 | 0.0145 | 0.9974 | 0.9981 |
| 0.0098 | 9.0 | 6021 | 0.0047 | 0.9994 | 0.9996 |
| 0.0025 | 10.0 | 6690 | 0.0000 | 1.0 | 1.0 |
| 0.0044 | 11.0 | 7359 | 0.0035 | 0.9993 | 0.9995 |
| 0.0 | 12.0 | 8028 | 0.0027 | 0.9996 | 0.9997 |
| 0.0027 | 13.0 | 8697 | 0.0036 | 0.9993 | 0.9995 |
| 0.0055 | 14.0 | 9366 | 0.0017 | 0.9998 | 0.9999 |
| 0.0 | 15.0 | 10035 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 16.0 | 10704 | 0.0000 | 1.0 | 1.0 |
| 0.0022 | 17.0 | 11373 | 0.0111 | 0.9981 | 0.9986 |
| 0.0004 | 18.0 | 12042 | 0.0011 | 0.9994 | 0.9996 |
| 0.0 | 19.0 | 12711 | 0.0020 | 0.9994 | 0.9996 |
| 0.0 | 20.0 | 13380 | 0.0023 | 0.9993 | 0.9995 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
shivaniNK8/mt5-small-finetuned-cnn-news | 5659ba0e4876e2bf26bcf73d9db3785c83fdb135 | 2022-07-15T03:42:23.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | shivaniNK8 | null | shivaniNK8/mt5-small-finetuned-cnn-news | 13 | null | transformers | 10,385 | Entry not found |
Fagen/TrueNeuromiron2 | 830b261f045d23c082dbdfd46ec9c335de9df70a | 2022-07-14T20:10:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:unlicense"
]
| text-generation | false | Fagen | null | Fagen/TrueNeuromiron2 | 13 | null | transformers | 10,386 | ---
license: unlicense
---
|
mrm8488/bloom-6b3-8bit | 477af33966d641e404c7f6b5e900dd968525835b | 2022-07-17T10:37:19.000Z | [
"pytorch",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"arxiv:2106.09685",
"transformers",
"license:bigscience-bloom-rail-1.0"
]
| text-generation | false | mrm8488 | null | mrm8488/bloom-6b3-8bit | 13 | 2 | transformers | 10,387 | ---
inference: false
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
pipeline_tag: text-generation
---
### Quantized bigscience/bloom 6B3 with 8-bit weights
Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom-6b3](https://huggingface.co/bigscience/bloom-6b3), a ~6-billion-parameter language model that you can run and fine-tune with less memory.
Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce model size.
### How to fine-tune
TBA
### How to use
This model can be used by adapting Bloom's original implementation. The code below is an adaptation of [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb):
```python
import transformers
import torch
import torch.nn as nn
import torch.nn.functional as F
from bitsandbytes.functional import quantize_blockwise, dequantize_blockwise
from typing import Tuple
from torch.cuda.amp import custom_fwd, custom_bwd
class FrozenBNBLinear(nn.Module):
def __init__(self, weight, absmax, code, bias=None):
assert isinstance(bias, nn.Parameter) or bias is None
super().__init__()
self.out_features, self.in_features = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
self.bias = bias
def forward(self, input):
output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_linear(cls, linear: nn.Linear) -> "FrozenBNBLinear":
weights_int8, state = quantize_blockise_lowmemory(linear.weight)
return cls(weights_int8, *state, linear.bias)
def __repr__(self):
return f"{self.__class__.__name__}({self.in_features}, {self.out_features})"
class DequantizeAndLinear(torch.autograd.Function):
@staticmethod
@custom_fwd
def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
ctx.save_for_backward(input, weights_quantized, absmax, code)
ctx._has_bias = bias is not None
return F.linear(input, weights_deq, bias)
@staticmethod
@custom_bwd
def backward(ctx, grad_output: torch.Tensor):
assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
input, weights_quantized, absmax, code = ctx.saved_tensors
# grad_output: [*batch, out_features]
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
grad_input = grad_output @ weights_deq
grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
return grad_input, None, None, None, grad_bias
class FrozenBNBEmbedding(nn.Module):
def __init__(self, weight, absmax, code):
super().__init__()
self.num_embeddings, self.embedding_dim = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
def forward(self, input, **kwargs):
with torch.no_grad():
            # note: both quantized weights and input indices are *not* differentiable
weight_deq = dequantize_blockwise(self.weight, absmax=self.absmax, code=self.code)
output = F.embedding(input, weight_deq, **kwargs)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_embedding(cls, embedding: nn.Embedding) -> "FrozenBNBEmbedding":
weights_int8, state = quantize_blockise_lowmemory(embedding.weight)
return cls(weights_int8, *state)
def __repr__(self):
return f"{self.__class__.__name__}({self.num_embeddings}, {self.embedding_dim})"
def quantize_blockise_lowmemory(matrix: torch.Tensor, chunk_size: int = 2 ** 20):
assert chunk_size % 4096 == 0
code = None
chunks = []
absmaxes = []
flat_tensor = matrix.view(-1)
for i in range((matrix.numel() - 1) // chunk_size + 1):
input_chunk = flat_tensor[i * chunk_size: (i + 1) * chunk_size].clone()
quantized_chunk, (absmax_chunk, code) = quantize_blockwise(input_chunk, code=code)
chunks.append(quantized_chunk)
absmaxes.append(absmax_chunk)
matrix_i8 = torch.cat(chunks).reshape_as(matrix)
absmax = torch.cat(absmaxes)
return matrix_i8, (absmax, code)
def convert_to_int8(model):
"""Convert linear and embedding modules to 8-bit with optional adapters"""
for module in list(model.modules()):
for name, child in module.named_children():
if isinstance(child, nn.Linear):
print(name, child)
setattr(
module,
name,
FrozenBNBLinear(
weight=torch.zeros(child.out_features, child.in_features, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
bias=child.bias,
),
)
elif isinstance(child, nn.Embedding):
setattr(
module,
name,
FrozenBNBEmbedding(
weight=torch.zeros(child.num_embeddings, child.embedding_dim, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
)
)
class BloomBlock(transformers.models.bloom.modeling_bloom.BloomBlock):
def __init__(self, config, layer_number=None):
super().__init__(config, layer_number)
convert_to_int8(self.self_attention)
convert_to_int8(self.mlp)
class BloomModel(transformers.models.bloom.modeling_bloom.BloomModel):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
class BloomForCausalLM(transformers.models.bloom.modeling_bloom.BloomForCausalLM):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
transformers.models.bloom.modeling_bloom.BloomBlock = BloomBlock
model_name = 'mrm8488/bloom-6b3-8bit'
model = BloomForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True)
tokenizer = transformers.BloomTokenizerFast.from_pretrained(model_name)
prompt = tokenizer("Given a table named salaries and columns id, created_at, salary, age. Creates a SQL to answer What is the average salary for 22 years old:", return_tensors='pt')
out = model.generate(**prompt, min_length=10, do_sample=True)
tokenizer.decode(out[0])
``` |
pyronear/mobilenet_v3_large | 978a0ff4419da465d71361a700e0dd0290cac291 | 2022-07-17T23:48:57.000Z | [
"pytorch",
"onnx",
"dataset:pyronear/openfire",
"arxiv:1905.02244",
"transformers",
"image-classification",
"license:apache-2.0"
]
| image-classification | false | pyronear | null | pyronear/mobilenet_v3_large | 13 | null | transformers | 10,388 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# MobileNet V3 - Large model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The MobileNet V3 architecture was introduced in [this paper](https://arxiv.org/pdf/1905.02244.pdf).
## Model description
The core idea of the authors is to simplify the final stage, while using SiLU as activations and making Squeeze-and-Excite blocks larger.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/mobilenet_v3_large").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1905-02244,
author = {Andrew Howard and
Mark Sandler and
Grace Chu and
Liang{-}Chieh Chen and
Bo Chen and
Mingxing Tan and
Weijun Wang and
Yukun Zhu and
Ruoming Pang and
Vijay Vasudevan and
Quoc V. Le and
Hartwig Adam},
title = {Searching for MobileNetV3},
journal = {CoRR},
volume = {abs/1905.02244},
year = {2019},
url = {http://arxiv.org/abs/1905.02244},
eprinttype = {arXiv},
eprint = {1905.02244},
timestamp = {Thu, 27 May 2021 16:20:51 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{chintala_torchvision_2017,
author = {Chintala, Soumith},
month = {4},
title = {{Torchvision}},
url = {https://github.com/pytorch/vision},
year = {2017}
}
``` |
Aktsvigun/bart-base_abssum_scisummnet_23419 | 3229c7c52ba662181456b75739b74ed8450d68d1 | 2022-07-18T08:09:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_abssum_scisummnet_23419 | 13 | null | transformers | 10,389 | Entry not found |
Albe/housing-categories | bda03fbcd40529a0be85dbb32b23327f6a5b0289 | 2022-07-18T09:37:40.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | Albe | null | Albe/housing-categories | 13 | null | transformers | 10,390 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: housing-categories
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
# housing-categories
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
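A minimal inference sketch (added for illustration; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Albe/housing-categories")
# Accepts a local file path, a URL, or a PIL image.
print(classifier("path/to/some_building.jpg"))
```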
## Example Images
#### caravan

#### castle

#### farm

#### tree house

#### yurt
 |
icity/distilgpt2-finetuned-wikitext2 | e656aca732aa03bf32fe95fbda803cedeba20c09 | 2022-07-28T13:14:53.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | icity | null | icity/distilgpt2-finetuned-wikitext2 | 13 | null | transformers | 10,391 | Entry not found |
shivaniNK8/t5-small-finetuned-cnn-news | 1b1c41d48ecacf083dd31932f0ab32dbb07622b3 | 2022-07-19T02:37:27.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | shivaniNK8 | null | shivaniNK8/t5-small-finetuned-cnn-news | 13 | null | transformers | 10,392 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.7231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8412
- Rouge1: 24.7231
- Rouge2: 12.292
- Rougel: 20.5347
- Rougelsum: 23.4668
## Model description
More information needed
## Intended uses & limitations
More information needed
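In the meantime, a minimal usage sketch with the summarization pipeline (added for illustration; the article text is a placeholder, and the T5 "summarize:" prefix is applied automatically only if it is present in the checkpoint config):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="shivaniNK8/t5-small-finetuned-cnn-news")
article = "(CNN) -- Replace this placeholder with the news article you want to condense..."
print(summarizer(article, max_length=60, min_length=20, do_sample=False))
```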
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.0318 | 1.0 | 718 | 1.8028 | 24.5415 | 12.0907 | 20.5343 | 23.3386 |
| 1.8307 | 2.0 | 1436 | 1.8028 | 24.0965 | 11.6367 | 20.2078 | 22.8138 |
| 1.6881 | 3.0 | 2154 | 1.8136 | 25.0822 | 12.6509 | 20.9523 | 23.8303 |
| 1.5778 | 4.0 | 2872 | 1.8269 | 24.4271 | 11.8443 | 20.2281 | 23.0941 |
| 1.501 | 5.0 | 3590 | 1.8412 | 24.7231 | 12.292 | 20.5347 | 23.4668 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/yashar | ef6b071da5f5ec0ed0bbfb9ae2865e70f78247de | 2022-07-19T02:12:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/yashar | 13 | null | transformers | 10,393 | ---
language: en
thumbnail: http://www.huggingtweets.com/yashar/1658196662556/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475314622332764161/tzLI4Zeb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Yashar Ali 🐘</div>
<div style="text-align: center; font-size: 14px;">@yashar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Yashar Ali 🐘.
| Data | Yashar Ali 🐘 |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 1355 |
| Short tweets | 332 |
| Tweets kept | 1543 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n7cco99/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yashar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ms5g8tc6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ms5g8tc6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yashar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jinwooChoi/SKKU_AP_SA_KBT1 | 5cb5768b38da70791b8d525597f692bc875a12a9 | 2022-07-25T05:47:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KBT1 | 13 | null | transformers | 10,394 | Entry not found |
nloc2578/3.4 | 80aa62a34920ecc6d33daef16aa0ea95b9761d37 | 2022-07-19T10:19:43.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | nloc2578 | null | nloc2578/3.4 | 13 | null | transformers | 10,395 | ---
tags:
- generated_from_trainer
model-index:
- name: '3.4'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3.4
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4891
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.915 | 0.11 | 1000 | 1.6700 |
| 1.761 | 0.22 | 2000 | 1.5926 |
| 1.6732 | 0.33 | 3000 | 1.5583 |
| 1.67 | 0.45 | 4000 | 1.5301 |
| 1.6782 | 0.56 | 5000 | 1.5151 |
| 1.6471 | 0.67 | 6000 | 1.4972 |
| 1.5983 | 0.78 | 7000 | 1.4906 |
| 1.5889 | 0.89 | 8000 | 1.4891 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
nev/byt5-song-lyrics | 3208b735c440e7cf889f6ff388d781177a054b19 | 2022-07-20T10:46:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"music",
"byt5",
"license:isc",
"autotrain_compatible"
]
| text2text-generation | false | nev | null | nev/byt5-song-lyrics | 13 | null | transformers | 10,396 | ---
language:
- en
tags:
- music
- t5
- byt5
license: "isc"
metrics:
- accuracy
---
# ByT5 Song Lyrics
This is a Seq2Seq model trained on a karaoke dataset to predict syllables with pitch and timing from song lyrics.
As of writing, the model has only been trained on 1/2 of the full dataset. Expect the quality to improve later.
The hosted inference widget appears to generate with a small maximum sequence length, so the output shown on the right only covers the first couple of syllables; running generation locally with a longer limit, as sketched below, works around this.
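A minimal local-generation sketch (not part of the original card; the input lyric and the `max_new_tokens` value are illustrative assumptions):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "nev/byt5-song-lyrics"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# ByT5 operates on raw bytes, so the lyrics are passed in as plain text.
inputs = tokenizer("Twinkle twinkle little star", return_tensors="pt")
# Raise the generation budget well beyond the widget default to get more than a couple of syllables.
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |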
figurative-nlp/English-Simile-Generation | 33e1eec96b3c42559433b2e69fcf1393d91102ac | 2022-07-20T01:54:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | figurative-nlp | null | figurative-nlp/English-Simile-Generation | 13 | null | transformers | 10,397 | English-Simile-Generation is a seq2seq paraphrase model which can transform sentence A to sentence B containing figurative or simile expression.
A: Now I feel sad to see your scientific research progress is so slow.
B: Now I feel sad to see your scientific research progress is as slow as snail.
**To our knowledge, our model has better performance, diversity and practicality than the approach of the EMNLP 2021 paper "Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation".**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/English-Simile-Generation")
model = AutoModelForSeq2SeqLM.from_pretrained("figurative-nlp/English-Simile-Generation")
input_ids = tokenizer(
    "Adrenaline shot through him powerful", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=64)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
# result: Adrenaline shot through him like an electric current
``` |
ClassCat/gpt2-small-greek-v2 | a6aeea86d42eee665b1195f2f5776078d370fbb2 | 2022-07-23T09:26:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"el",
"dataset:cc100",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"license:cc-by-sa-4.0"
]
| text-generation | false | ClassCat | null | ClassCat/gpt2-small-greek-v2 | 13 | 1 | transformers | 10,398 | ---
language: el
license: cc-by-sa-4.0
datasets:
- cc100
- oscar
- wikipedia
widget:
- text: "Αυτό είναι ένα"
- text: "Ανοιξα την"
- text: "Ευχαριστώ για το"
- text: "Έχει πολύ καιρό που δεν έχουμε"
---
## Greek GPT2 small model Version 2 (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses approximately half the size of GPT2 base model parameters.
### Tokenizer
Using BPE tokenizer with vocabulary size 50,000.
### Training Data
* Subset of [CC-100/el](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
* Subset of [oscar](https://huggingface.co/datasets/oscar)
* [wiki40b/el](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bel) (Greek Wikipedia)
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-small-greek-v2')
generator("Αυτό είναι ένα", max_length=50, num_return_sequences=5)
```
|
CennetOguz/gpt2-kit-st | 1a8daec4d4ea0eae8d87371690eaf65b61524da6 | 2022-07-20T09:47:18.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | CennetOguz | null | CennetOguz/gpt2-kit-st | 13 | null | transformers | 10,399 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kit-st
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kit-st
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 94.6020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 102.7407 |
| No log | 2.0 | 6 | 102.7407 |
| No log | 3.0 | 9 | 94.6020 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|