modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-yo-fi | 82b100a0d8c4e8ca07f63a325b68032af8abd99b | 2021-09-11T10:52:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"yo",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-yo-fi | 7 | null | transformers | 13,900 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-fi
* source languages: yo
* target languages: fi
* OPUS readme: [yo-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.eval.txt)
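A minimal usage sketch via the 🤗 `translation` pipeline (illustrative only; the input sentence is an arbitrary short Yoruba phrase):
```python
from transformers import pipeline

# load the fine-tuned Marian checkpoint through the generic translation pipeline
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-yo-fi")
print(translator("Bawo ni?"))  # short Yoruba input; the output is its Finnish translation
```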
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.fi | 21.5 | 0.434 |
|
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.5 | 7201c10e462c55673cf29cc6a82dfb788fd10c24 | 2021-11-19T20:06:27.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.5 | 7 | null | transformers | 13,901 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.1 | d0535c7cb9413d5f25e106a0788606b4620dccd7 | 2021-11-12T05:36:19.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.1 | 7 | null | transformers | 13,902 | Entry not found |
Hyeon/distilbert-base-uncased-finetuned-cola | d7573276596c25b5552848e603fcf74b487980f9 | 2022-01-19T10:16:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Hyeon | null | Hyeon/distilbert-base-uncased-finetuned-cola | 7 | null | transformers | 13,903 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5442538936990396
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8575
- Matthews Correlation: 0.5443
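A minimal usage sketch via the 🤗 `text-classification` pipeline (illustrative only; the label names depend on the exported config):
```python
from transformers import pipeline

# score a sentence for grammatical acceptability (CoLA-style binary classification)
classifier = pipeline("text-classification", model="Hyeon/distilbert-base-uncased-finetuned-cola")
print(classifier("The book what I read was great."))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```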
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5242 | 1.0 | 535 | 0.5258 | 0.4391 |
| 0.346 | 2.0 | 1070 | 0.5264 | 0.5074 |
| 0.2334 | 3.0 | 1605 | 0.6808 | 0.5074 |
| 0.1711 | 4.0 | 2140 | 0.7737 | 0.5373 |
| 0.1205 | 5.0 | 2675 | 0.8575 | 0.5443 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
JAlexis/Bertv1_fine | 2a512b5c5de8f07969436c38dadd2ac1fcda067a | 2022-03-01T22:33:49.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"transformers",
"autotrain_compatible"
]
| question-answering | false | JAlexis | null | JAlexis/Bertv1_fine | 7 | null | transformers | 13,904 | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
- cord19
metrics:
- f1
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on Cord19 Dataset.
## How to use
```python
from transformers import pipeline

model_name = "JAlexis/Bertv1_fine"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)

# a Python dict cannot hold duplicate keys, so pass one question/context pair per call
inputs = {
    'question': 'How can I protect myself against covid-19?',
    'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19).',
}
nlp(inputs)
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 7
max_seq_len = max_length
learning_rate = AdamW: 2e-5
```
|
JSv4/layoutlmv2-finetuned-funsd-test | 9f780e942c27780dd7fbf58197aeb404f95a6931 | 2021-12-02T07:48:37.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | JSv4 | null | JSv4/layoutlmv2-finetuned-funsd-test | 7 | null | transformers | 13,905 | Entry not found |
Jedi33/tonystarkAI | 287da154ae552625e3e90d4516c216e2c0db026e | 2021-09-03T19:33:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Jedi33 | null | Jedi33/tonystarkAI | 7 | null | transformers | 13,906 | ---
tags:
- conversational
---
# Tony Stark |
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje | 821d3e496ffae1b23553f2dba7e1a3155124338f | 2021-12-07T09:39:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Jeska | null | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje | 7 | null | transformers | 13,907 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTje
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6223
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
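As a rough code equivalent, the hyperparameters above map onto 🤗 `TrainingArguments` as sketched below (the output directory is a placeholder, and Adam's default betas/epsilon already match the listed values):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vaccinchat-bertje",      # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```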
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.4666 | 1.0 | 1320 | 2.3355 | 0.5768 |
| 1.5293 | 2.0 | 2640 | 1.1118 | 0.8144 |
| 0.8031 | 3.0 | 3960 | 0.6362 | 0.8803 |
| 0.2985 | 4.0 | 5280 | 0.5119 | 0.8958 |
| 0.1284 | 5.0 | 6600 | 0.5023 | 0.8931 |
| 0.0842 | 6.0 | 7920 | 0.5246 | 0.9022 |
| 0.0414 | 7.0 | 9240 | 0.5581 | 0.9013 |
| 0.0372 | 8.0 | 10560 | 0.5721 | 0.9004 |
| 0.0292 | 9.0 | 11880 | 0.5469 | 0.9141 |
| 0.0257 | 10.0 | 13200 | 0.5871 | 0.9059 |
| 0.0189 | 11.0 | 14520 | 0.6181 | 0.9049 |
| 0.0104 | 12.0 | 15840 | 0.6184 | 0.9068 |
| 0.009 | 13.0 | 17160 | 0.6013 | 0.9049 |
| 0.0051 | 14.0 | 18480 | 0.6205 | 0.9059 |
| 0.0035 | 15.0 | 19800 | 0.6223 | 0.9068 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL | d5960305f91ee4c82e24fa83ee4ea1680bd49307 | 2021-12-02T08:29:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Jeska | null | Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL | 7 | null | transformers | 13,908 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
This model is a fine-tuned version of [Jeska/BertjeWDialDataQA20k](https://huggingface.co/Jeska/BertjeWDialDataQA20k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Accuracy: 0.6322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4418 | 1.0 | 1457 | 2.3866 | 0.5406 |
| 1.7742 | 2.0 | 2914 | 1.9365 | 0.6069 |
| 1.1313 | 3.0 | 4371 | 1.8355 | 0.6322 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jinhwan/krelectra-base-mecab | 443b641006853375598e2d6e5bb7a292c505156a | 2022-01-12T03:18:55.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers",
"korean",
"license:apache-2.0"
]
| null | false | Jinhwan | null | Jinhwan/krelectra-base-mecab | 7 | 1 | transformers | 13,909 | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KrELECTRA-base-mecab
Korean-based Pre-trained ELECTRA Language Model using Mecab (Morphological Analyzer)
## Usage
### Load model and tokenizer
```python
>>> from transformers import AutoTokenizer, AutoModelForPreTraining
>>> model = AutoModelForPreTraining.from_pretrained("Jinhwan/krelectra-base-mecab")
>>> tokenizer = AutoTokenizer.from_pretrained("Jinhwan/krelectra-base-mecab")
```
### Tokenizer example
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("Jinhwan/krelectra-base-mecab")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]'])
[2, 7214, 24023, 24663, 26580, 3195, 7086, 3746, 5500, 17, 3]
```
|
Jllama/dialoGPT-small-Joshua-test | 9d20173ab8ed6295b88d7c8e2f7892ef3a6073c6 | 2021-06-02T06:46:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Jllama | null | Jllama/dialoGPT-small-Joshua-test | 7 | null | transformers | 13,910 | ---
tags:
- conversational
---
# My Awesome Model |
JonatanGk/roberta-base-ca-finetuned-tecla | 9fd0f84bd81d358c24839a919d8e7639ee108185 | 2021-10-22T14:20:10.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:tecla",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JonatanGk | null | JonatanGk/roberta-base-ca-finetuned-tecla | 7 | 1 | transformers | 13,911 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tecla
metrics:
- accuracy
model-index:
- name: roberta-base-ca-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tecla
type: tecla
args: tecla
metrics:
- name: Accuracy
type: accuracy
value: 0.7361816335412737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ca-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the tecla dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9354
- Accuracy: 0.7362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8465 | 1.0 | 6888 | 0.8222 | 0.6990 |
| 0.6966 | 2.0 | 13776 | 0.7872 | 0.7157 |
| 0.5643 | 3.0 | 20664 | 0.8060 | 0.7268 |
| 0.4435 | 4.0 | 27552 | 0.8470 | 0.7333 |
| 0.3206 | 5.0 | 34440 | 0.9354 | 0.7362 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Jour/m2m100_418M-fr | 8d440a62355cdcd7cb2fba0f8ae7c2cf1bd47d37 | 2022-02-17T13:41:07.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| translation | false | Jour | null | Jour/m2m100_418M-fr | 7 | null | transformers | 13,912 | ---
license: mit
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: m2m100_418M-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-fr
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset.
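A minimal usage sketch, assuming the tokenizer was saved with the checkpoint and that the fine-tuned direction is English→French (the card does not state the direction):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Jour/m2m100_418M-fr")
tokenizer = M2M100Tokenizer.from_pretrained("Jour/m2m100_418M-fr")

tokenizer.src_lang = "en"                       # assumption: English source text
encoded = tokenizer("Open the file manager.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```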
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0+cpu
- Datasets 1.16.1
- Tokenizers 0.10.3
|
KBLab/bert-base-swedish-cased-new | df27b4271147720d0b386d66c01a5c87767e5162 | 2022-03-17T11:10:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"sv",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | KBLab | null | KBLab/bert-base-swedish-cased-new | 7 | null | transformers | 13,913 | ---
language:
- sv
---
# 🤗 BERT Swedish
This BERT model was trained using the 🤗 transformers library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
To avoid excessive padding, documents shorter than 512 tokens were concatenated into one large sequence of 512 tokens, and larger documents were split into multiple 512-token sequences, following https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py
Training was done for a bit more than 8 epochs with a batch size of 2048, resulting in a little less than 125k training steps.
The model has three sister models trained on the same dataset:
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
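A minimal fill-mask sketch for this checkpoint (the example sentence is arbitrary):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KBLab/bert-base-swedish-cased-new")
print(unmasker("Huvudstaden i Sverige är [MASK]."))  # top predictions for the masked Swedish word
```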
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-5-001 | 18bfebff135f4939e28e5f60d74989869b6dd512 | 2021-12-15T19:10:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-5-001 | 7 | null | transformers | 13,914 | Entry not found |
Kyoungmin/beauty-base-KLCP | 848a6a959c66a6d063e07d9d148cf61d9a5550bf | 2021-08-25T06:35:36.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Kyoungmin | null | Kyoungmin/beauty-base-KLCP | 7 | null | transformers | 13,915 | This is a **KOREAN** BERT masked language model (BertForMaskedLM) adapted to the **BEAUTY** domain.
About 60,000 reviews were used.
It was fine-tuned from the _beomi/kcbert-base_ model weights.
Enjoy! |
LilaBoualili/bert-sim-doc | 8e9630547a8fc9d8be8c535c84bfb11638ee98f7 | 2021-05-20T09:57:43.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | LilaBoualili | null | LilaBoualili/bert-sim-doc | 7 | null | transformers | 13,916 | Entry not found |
Lumos/imdb2 | 6fb1a6d9df52abdd85463200f954b2b7bc38ebe2 | 2021-12-08T10:07:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Lumos | null | Lumos/imdb2 | 7 | null | transformers | 13,917 | Entry not found |
M-FAC/bert-mini-finetuned-mnli | 780061727f47254ff763de653920bb8b7e2fd5f2 | 2021-12-13T08:11:07.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
]
| text-classification | false | M-FAC | null | M-FAC/bert-mini-finetuned-mnli | 7 | null | transformers | 13,918 | # BERT-mini model finetuned with M-FAC
This model is finetuned on the MNLI dataset with the state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on MNLI validation set:
```bash
matched_accuracy = 75.13
mismatched_accuracy = 75.93
```
Mean and standard deviation for 5 runs on MNLI validation set:
| | Matched Accuracy | Mismatched Accuracy |
|:-----:|:----------------:|:-------------------:|
| Adam | 73.30 ± 0.20 | 74.85 ± 0.09 |
| M-FAC | 74.59 ± 0.41 | 75.95 ± 0.14 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-mini \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
M-FAC/bert-tiny-finetuned-qqp | e64aee5fda83815035c5d478dd527adb78c5650b | 2021-12-13T08:14:56.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
]
| text-classification | false | M-FAC | null | M-FAC/bert-tiny-finetuned-qqp | 7 | null | transformers | 13,919 | # BERT-tiny model finetuned with M-FAC
This model is finetuned on the QQP dataset with the state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QQP validation set:
```bash
f1 = 79.84
accuracy = 84.40
```
Mean and standard deviation for 5 runs on QQP validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 77.58 ± 0.08 | 81.09 ± 0.15 |
| M-FAC | 79.71 ± 0.13 | 84.29 ± 0.08 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 1234 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name qqp \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
Maelstrom77/rtevib | cf8e2c16e8033610764330f32514cfb8b8eb13a7 | 2021-11-01T12:19:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/rtevib | 7 | null | transformers | 13,920 | Entry not found |
Maha/OGBV-gender-twtrobertabase-en-founta_final | f89a9f1d6fbe4159453bd04d1f99e73f6aee4d01 | 2022-02-19T17:10:31.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Maha | null | Maha/OGBV-gender-twtrobertabase-en-founta_final | 7 | null | transformers | 13,921 | Entry not found |
Maha/hin-trac1_fin | 04d02a59844416ce825a7cf9d19b8207668fd37e | 2022-02-22T06:13:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maha | null | Maha/hin-trac1_fin | 7 | 1 | transformers | 13,922 | Entry not found |
Majed/internet2 | db2aa576afb2c3fc8819b2c2c4eaa9ddb46373b4 | 2021-09-08T20:55:55.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Majed | null | Majed/internet2 | 7 | null | transformers | 13,923 | Entry not found |
MalawiUniST/ISO6392.nya.ny | 2c783c310e11475d4c18388a690e918ebc605b61 | 2021-04-07T14:30:00.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | MalawiUniST | null | MalawiUniST/ISO6392.nya.ny | 7 | null | transformers | 13,924 | This model was trained on a Nyanja dataset using the Longformer architecture. |
Maniac/wav2vec2-xls-r-urdu | 879996c550a964cf03580e983e74afd235c251a2 | 2022-03-24T11:51:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"sv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Maniac | null | Maniac/wav2vec2-xls-r-urdu | 7 | 1 | transformers | 13,925 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- sv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ur
metrics:
- name: Test WER
type: wer
value: 67.48
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5614
- Wer: 0.6765
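The reported WER is the word error rate over the test transcripts. Purely as an illustration of the metric (this is not the card's evaluation code), it can be computed with the `jiwer` package on made-up strings:
```python
import jiwer

reference = "this is a test sentence"    # made-up ground-truth transcript
hypothesis = "this is test sentence"     # made-up model output
print(jiwer.wer(reference, hypothesis))  # fraction of word-level edits needed
```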
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9115 | 20.83 | 500 | 1.5400 | 0.7280 |
| 0.1155 | 41.67 | 1000 | 1.5614 | 0.6765 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0 |
Media1129/keyword-tag-model-4000-9-16 | 1eddcfb8940746e4dda342d95fffb47a1dd665d8 | 2021-09-17T00:54:06.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-4000-9-16 | 7 | null | transformers | 13,926 | Entry not found |
Media1129/keyword-tag-model-4000-9-16_more_ingredient | 74df0a21a96928a5b984dcb284b7e44701db5216 | 2021-09-17T02:07:04.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-4000-9-16_more_ingredient | 7 | null | transformers | 13,927 | Entry not found |
Media1129/keyword-tag-model-6000-v2 | 5e6c808b9705212dbe4209e42bca42d0f8138f1b | 2021-08-30T05:42:30.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-6000-v2 | 7 | null | transformers | 13,928 | Entry not found |
MickyMike/0-GPT2SP-aptanastudio | c899c4f44747db3ada35fc472ff3e1966993053e | 2021-08-19T02:00:06.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-aptanastudio | 7 | null | transformers | 13,929 | Entry not found |
MickyMike/0-GPT2SP-bamboo | 8b243d64bfde6f68db31e60a92d7b2120d44d282 | 2021-08-19T02:00:19.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-bamboo | 7 | null | transformers | 13,930 | Entry not found |
MickyMike/0-GPT2SP-clover | 3c646cd58d88490372ab0a41993f7d3cd7f2a3cb | 2021-08-19T02:00:33.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-clover | 7 | null | transformers | 13,931 | Entry not found |
MickyMike/0-GPT2SP-datamanagement | 18660226b603e409872dc844d43594b652d708d2 | 2021-08-19T02:00:45.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-datamanagement | 7 | null | transformers | 13,932 | Entry not found |
MickyMike/0-GPT2SP-moodle | cd394578bdb488152d1a537ba487c8611d1cf7d7 | 2021-08-19T02:01:40.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-moodle | 7 | null | transformers | 13,933 | Entry not found |
MickyMike/000-GPT2SP-talendesb-mesos | 55730e2ec35ccb78b602ea16ad5392e7bc30b45c | 2021-08-15T11:11:24.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-talendesb-mesos | 7 | null | transformers | 13,934 | Entry not found |
MickyMike/1-GPT2SP-mule | 3a571a2e6b154e82995800c2ea148195b3136ce6 | 2021-08-15T13:45:30.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-mule | 7 | null | transformers | 13,935 | Entry not found |
MickyMike/6-GPT2SP-jirasoftware | dbcc508c4b49ee788437399ca199d3c50df95814 | 2021-08-30T02:32:57.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/6-GPT2SP-jirasoftware | 7 | null | transformers | 13,936 | Entry not found |
MickyMike/graphcodebert-c | 21740675d2500eccf279feb3dec74e5a1e3d418d | 2021-10-03T17:48:37.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | MickyMike | null | MickyMike/graphcodebert-c | 7 | null | transformers | 13,937 | Entry not found |
Milian/bert_finetuning_test | 13982ec68bb135b431d18a97c2228c1bbcf1519b | 2021-05-18T21:41:58.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Milian | null | Milian/bert_finetuning_test | 7 | null | transformers | 13,938 | Entry not found |
MultiBertGunjanPatrick/multiberts-seed-0-1000k | 75fa8117d87586fb1c22ca4020a1c96e0ff092d1 | 2021-10-04T04:57:08.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-1000k | 7 | null | transformers | 13,939 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 1000k (uncased)
This is the seed 0 intermediate checkpoint (1000k steps) of the MultiBERTs (pretrained BERT) model for the English language, trained with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1000k')
model = BertModel.from_pretrained("multiberts-seed-0-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
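An illustrative sketch of the 80/10/10 rule above (not the MultiBERTs training code; `bert-base-uncased` is used only as a stand-in tokenizer):
```python
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def mask_tokens(input_ids, mlm_probability=0.15):
    """Apply the 80/10/10 masking rule to a list of token ids."""
    masked, labels = list(input_ids), list(input_ids)
    for i, token_id in enumerate(input_ids):
        if token_id in tokenizer.all_special_ids or random.random() >= mlm_probability:
            labels[i] = -100                  # not masked -> ignored by the MLM loss
            continue
        r = random.random()
        if r < 0.8:                           # 80%: replace with [MASK]
            masked[i] = tokenizer.mask_token_id
        elif r < 0.9:                         # 10%: replace with a random vocabulary token
            masked[i] = random.randrange(tokenizer.vocab_size)
        # remaining 10%: keep the original token, but still predict it
    return masked, labels

ids = tokenizer("The quick brown fox jumps over the lazy dog.")["input_ids"]
masked_ids, labels = mask_tokens(ids)
print(tokenizer.decode(masked_ids))
```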
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-300k | b2384a8b1077c660d43552bd1da2dbd0587fec2a | 2021-10-04T04:56:16.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-300k | 7 | null | transformers | 13,940 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 300k (uncased)
This is the seed 0 intermediate checkpoint (300k steps) of the MultiBERTs (pretrained BERT) model for the English language, trained with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-300k')
model = BertModel.from_pretrained("multiberts-seed-0-300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
NanniKirby/DialoGPT-medium-bapi | 54cb28f8869ee282399205cfc74404db102b494f | 2021-09-29T13:39:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | NanniKirby | null | NanniKirby/DialoGPT-medium-bapi | 7 | null | transformers | 13,941 | ---
tags:
- conversational
---
# Bapibot |
Navigator/DialoGPT-medium-martymcfly | 9314e922bb6fa36bfc01f54d3a08dc2f5c405d89 | 2022-02-17T17:33:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Navigator | null | Navigator/DialoGPT-medium-martymcfly | 7 | 1 | transformers | 13,942 | ---
tags:
- conversational
---
# Marty McFly model |
Navya2608/DialoGPT-medium-chandler | 8e56960ff94b2d06b2915a3ac0bda962e2866ff3 | 2021-11-05T14:37:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Navya2608 | null | Navya2608/DialoGPT-medium-chandler | 7 | null | transformers | 13,943 | ---
tags:
- conversational
---
# Chandler Bing DialoGPT Model |
NikolajW/BaselineThesis | 13ffff6025812b785a8efce69c3b8b1568b9a3cb | 2021-11-04T19:33:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | NikolajW | null | NikolajW/BaselineThesis | 7 | null | transformers | 13,944 | Entry not found |
Nisarg2701/DialoGPT-medium-Rick | 85e97c90812515d470dcb062eb9ecff4ad2d3158 | 2021-09-09T17:47:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Nisarg2701 | null | Nisarg2701/DialoGPT-medium-Rick | 7 | null | transformers | 13,945 | ---
tags:
- conversational
license: apache-2.0
---
### Rick DialoGPT Model |
Philipuss/GPT-Macbeth | 65f11ee74f606c98166e241eedf82d4e1b783f2c | 2021-11-01T02:16:42.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"transformers"
]
| null | false | Philipuss | null | Philipuss/GPT-Macbeth | 7 | 1 | transformers | 13,946 | ### **GPT-Macbeth**
A custom finetune of GPT-2 trained on a custom dataset of victorian literature
## Information
The goal of this finetune is to output high-quality victorian literature, while being customizable with Author's Note and being light to run (aka not being a GPT-Neo or GPT-Jax finetune, for now at least).
## Authors Note
Author's Note was added manually, so please appreciate it. :)
The format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]
Some words will work well, some won't. Please make sure to have spaces before each ][.
Most popular victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in a somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune.
When it comes to the genres, "novel", "fiction", "horror" and "romance" work best, but from playing around with it, I've noticed that most other not too specific genres work pretty well too.
The tags are a bit complicated. Adding "normal" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty low-pace. Using "real-life" will push the AI towards a historical/biographical path. Almost all tags should work. Using "man" or "woman" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author.
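A hedged generation sketch that follows this Author's Note format, assuming the checkpoint loads as a standard GPT-2 causal LM (the prompt continuation is made up):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Philipuss/GPT-Macbeth")

prompt = (
    "[ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]\n"
    "The house at the end of the lane had stood empty for many a year, "
)
out = generator(
    prompt,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.3,          # the card recommends a very low temperature
    repetition_penalty=1.3,   # and a raised repetition penalty
)
print(out[0]["generated_text"])
```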
## History
Version 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1.
Version 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note
### Notes
Please use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too.
The model was specifically converted to PyTorch so that most front-end GUI's should run it. It has been only tested on KoboldAI, but should theoretically work on others too.
For some odd reason, my finetune is capable of writing victorian NSFW content, if used the right way. No NSFW was in the dataset and considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it.
You may sometimes get roman numerals on random occasions, this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune.
If you are wondering why I renamed my finetune to Macbeth, there are a few reasons: First, it sounds much better and smoother than Kelini, second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset, and third, the most important reason, it was mentioned in Hamilton, so yes, my love of Hamilton is bleeding everywhere and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note.
### Credits
I want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then is OpenAI for making GPT-2. I also want to thank most active people on the AIM Discord server in the community-projects channel. Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. Seeker and Aedial for helping me in cleaning the dataset and to *finetune* from the NovelAI team for perhaps making my finetune output much better quality by telling me about the magic of the <\|endoftext\|> token.
P.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!).
|
Plim/xls-r-1b-fr | 451cd80d6eac803e300e30f6e5d0f9511f5310d5 | 2022-02-04T11:45:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Plim | null | Plim/xls-r-1b-fr | 7 | null | transformers | 13,947 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2464
- Wer: 0.2220
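A minimal transcription sketch via the 🤗 `automatic-speech-recognition` pipeline (the audio path is a placeholder; the pipeline decodes common audio formats through ffmpeg and resamples to 16 kHz):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Plim/xls-r-1b-fr")
print(asr("path/to/french_audio.wav"))  # hypothetical local audio file
```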
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0326 | 0.32 | 1000 | 0.3092 | 0.2718 |
| 1.0828 | 0.65 | 2000 | 0.2843 | 0.2606 |
| 1.0771 | 0.97 | 3000 | 0.2774 | 0.2488 |
| 1.0306 | 1.3 | 4000 | 0.2588 | 0.2351 |
| 1.0052 | 1.62 | 5000 | 0.2483 | 0.2284 |
| 0.9865 | 1.94 | 6000 | 0.2464 | 0.2220 |
| 0.978 | 2.27 | 7000 | 0.2514 | 0.2172 |
| 1.7438 | 2.59 | 8000 | 0.7983 | 0.5072 |
| 2.3309 | 2.92 | 9000 | 1.8917 | 0.9416 |
| 2.1834 | 3.24 | 10000 | 1.7496 | 0.9030 |
| 2.3047 | 3.56 | 11000 | 1.5377 | 0.8747 |
| 2.1378 | 3.89 | 12000 | 1.3501 | 0.7923 |
| 1.9812 | 4.21 | 13000 | 1.2662 | 0.7697 |
| 2.6855 | 4.54 | 14000 | 2.4120 | 0.9902 |
| 2.7482 | 4.86 | 15000 | 2.5341 | 0.9874 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Prompsit/paraphrase-bert-pt | 6e8756251c13e463728ec985bab2ca67d5cb43c6 | 2021-12-23T12:05:52.000Z | [
"pytorch",
"bert",
"text-classification",
"pt",
"transformers"
]
| text-classification | false | Prompsit | null | Prompsit/paraphrase-bert-pt | 7 | 2 | transformers | 13,948 | ---
pipeline_tag: text-classification
inference: false
language: pt
tags:
- transformers
---
# Prompsit/paraphrase-bert-pt
This model evaluates whether a candidate phrase is a paraphrase of a given phrase.
We have fine-tuned this model from pretrained "neuralmind/bert-base-portuguese-cased".
Model built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Please note that we're considering phrases instead of sentences, so keep in mind that the model doesn't expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "logo após o homicídio" and a candidate paraphrase like "pouco depois do assassinato", you can use the model like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-bert-pt")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-bert-pt")
input = tokenizer('logo após o homicídio','pouco depois do assassinato',return_tensors='pt')
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.2137, 0.7863]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.7863 and the probability of 0 (=It is not a paraphrase) is 0.2137, we can conclude, for our previous example, that "pouco depois do assassinato" is a paraphrase of "logo após o homicídio".
# Evaluation results
We used a test dataset of 16,500 human-tagged phrase pairs.
Metrics obtained are:
```
metrics={
'test_loss': 0.6074697375297546,
'test_accuracy': 0.7809,
'test_precision': 0.7157638466220329,
'test_recall': 0.40551724137931033,
'test_f1': 0.5177195685670262,
'test_matthews_correlation': 0.41603913834665324,
'test_runtime': 16.4585,
'test_samples_per_second': 607.587,
'test_steps_per_second': 19.017
}
``` |
RASMUS/wav2vec2-xlsr-fi-lm-1B | 45aab31314aa44c14a2fd4f766251b8cd0ccf5ab | 2022-03-24T11:51:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-fi-lm-1B | 7 | 1 | transformers | 13,949 | ---
language:
- fi
license: apache-2.0
tags:
- generated_from_trainer
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: wav2vec2-xlsr-fi-lm-1B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-fi-lm-1B
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common voice train/dev/other datasets.
It achieves the following results on the evaluation set without language model:
- Loss: 0.1853
- Wer: 0.2205
With language model:
- Wer: 0.1026
## Model description
More information needed
## Intended uses & limitations
More information needed
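Usage details are not given in the card, but a rough sketch of greedy (no language model) decoding with `transformers` is shown below; the audio file name is a placeholder, and reproducing the LM-boosted WER reported above would additionally require the repository's n-gram language model (e.g. via `pyctcdecode`), which is not covered here.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("RASMUS/wav2vec2-xlsr-fi-lm-1B")
model = Wav2Vec2ForCTC.from_pretrained("RASMUS/wav2vec2-xlsr-fi-lm-1B")

# Load a Finnish clip and resample it to the expected 16 kHz (placeholder path).
speech, sr = torchaudio.load("example_fi.wav")
speech = torchaudio.functional.resample(speech, sr, 16000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding corresponds to the "without language model" figures above.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```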
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8158 | 0.67 | 400 | 0.4835 | 0.6310 |
| 0.5679 | 1.33 | 800 | 0.4806 | 0.5538 |
| 0.6055 | 2.0 | 1200 | 0.3888 | 0.5083 |
| 0.5353 | 2.67 | 1600 | 0.3258 | 0.4365 |
| 0.4883 | 3.33 | 2000 | 0.3313 | 0.4204 |
| 0.4513 | 4.0 | 2400 | 0.2924 | 0.3904 |
| 0.3753 | 4.67 | 2800 | 0.2593 | 0.3608 |
| 0.3478 | 5.33 | 3200 | 0.2832 | 0.3551 |
| 0.3796 | 6.0 | 3600 | 0.2495 | 0.3402 |
| 0.2556 | 6.67 | 4000 | 0.2342 | 0.3106 |
| 0.229 | 7.33 | 4400 | 0.2181 | 0.2812 |
| 0.205 | 8.0 | 4800 | 0.2041 | 0.2523 |
| 0.1654 | 8.67 | 5200 | 0.2015 | 0.2416 |
| 0.152 | 9.33 | 5600 | 0.1942 | 0.2294 |
| 0.1569 | 10.0 | 6000 | 0.1853 | 0.2205 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
RTurk/DialoGPT-small-TIMBOT | 58b650ee8fd327c582f7ea56a6d00068cff61686 | 2021-10-07T15:51:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | RTurk | null | RTurk/DialoGPT-small-TIMBOT | 7 | null | transformers | 13,950 | ---
tags:
- conversational
---
# TIMBOT DialoGPT model |
Rachneet/t5-base-qg-hl-squadv2 | 724b2019e3926bf2464cb5994b85673ccd6d74d1 | 2021-06-23T03:54:18.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"question-generation",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | Rachneet | null | Rachneet/t5-base-qg-hl-squadv2 | 7 | null | transformers | 13,951 | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Although <hl> practicality <hl> beats purity </s>"
license: mit
---
### T5 for question-generation
This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example
`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For more details see [this](https://github.com/patil-suraj/question_generation) repo.
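Outside the inference API, a minimal local sketch with `transformers` could look like the following (the generation settings are illustrative defaults, not values published with this model):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Rachneet/t5-base-qg-hl-squadv2")
model = AutoModelForSeq2SeqLM.from_pretrained("Rachneet/t5-base-qg-hl-squadv2")

# Highlight the answer span with <hl> tokens and close the input with </s>.
text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```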
|
SEBIS/code_trans_t5_base_code_documentation_generation_java | 271367173cbdf34119d93444a3c5aa0edee5522a | 2021-06-23T04:20:17.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_java | 7 | null | transformers | 13,952 | ---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus java dataset.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java", skip_special_tokens=True),
device=0
)
tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/java/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask_finetune | 047be2fc2958bdae9296c5c20f9da17af13bcc0b | 2021-06-23T04:39:03.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask_finetune | 7 | null | transformers | 13,953 | ---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the php function/method.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/php/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code.
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_javascript | 37982453dfd09ba2c283c304456ace7b28989efd | 2021-06-23T10:03:55.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_javascript | 7 | null | transformers | 13,954 | ---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---
# CodeTrans model for code documentation generation javascript
Pretrained model on programming language javascript using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus javascript dataset.
## Intended uses & limitations
The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript", skip_special_tokens=True),
device=0
)
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/javascript/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_csharp | 466d97dba9f99dd4d44361fd3aa9eb1d08cc344d | 2021-06-23T10:19:38.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_csharp | 7 | null | transformers | 13,955 | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization csharp dataset.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/csharp/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_multitask_fr_en | 9711b850b49f1d8154f6d3eddd70861b1eeb8354 | 2021-06-23T11:10:07.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French English model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_fr_en | 7 | null | transformers | 13,956 |
---
language: French English
tags:
- translation French English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Raül Romeva i Rueda (Verts/ALE)"
---
# legal_t5_small_multitask_fr_en model
Model for translating legal text from French to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora, covering 42 language pairs,
from jrc-acquis, europarl and dcep, along with an unsupervised task in which the model performed masked language modelling.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_fr_en model; rather, the unsupervised task is added alongside all the translation tasks
to realize the multi-task learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to English.
### How to use
Here is how to use this model to translate legal text from French to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Raül Romeva i Rueda (Verts/ALE)"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_en model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_en | 39.123|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_de_small_finetuned | 999adff74a791e55b4ddda3ecd9ea213ad0afb0d | 2021-06-23T11:30:18.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Deustch model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_de_small_finetuned | 7 | null | transformers | 13,957 |
---
language: Cszech Deustch
tags:
- translation Cszech Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Vzhledem k tomu, že tento právní předpis bude přímo použitelný v členských státech a zavede mnoho povinností pro ty, na něž se vztahuje, je žádoucí, aby se jim poskytlo více času na přizpůsobení se těmto novým pravidlům."
---
# legal_t5_small_trans_cs_de_small_finetuned model
Model for translating legal text from Czech to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_de_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to German.
### How to use
Here is how to use this model to translate legal text from Czech to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Vzhledem k tomu, že tento právní předpis bude přímo použitelný v členských státech a zavede mnoho povinností pro ty, na něž se vztahuje, je žádoucí, aby se jim poskytlo více času na přizpůsobení se těmto novým pravidlům."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_de_small_finetuned | 44.175|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_it_small_finetuned | 4acb51a5e11f30533b9c4f0621863cd54706e7c4 | 2021-06-23T11:35:39.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Italian model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_it_small_finetuned | 7 | null | transformers | 13,958 |
---
language: Cszech Italian
tags:
- translation Cszech Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Členové přítomní při závěrečném hlasování"
---
# legal_t5_small_trans_cs_it_small_finetuned model
Model for translating legal text from Czech to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_it_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to Italian.
### How to use
Here is how to use this model to translate legal text from Czech to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Členové přítomní při závěrečném hlasování"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_it_small_finetuned | 46.367|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_es | 108abe60bb096bf836fcb69f905165ba875b33f3 | 2021-06-23T09:53:49.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Spanish model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_es | 7 | null | transformers | 13,959 |
---
language: French Spanish
tags:
- translation French Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "commission des libertés civiles, de la justice et des affaires intérieures"
---
# legal_t5_small_trans_fr_es model
Model for translating legal text from French to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Spanish.
### How to use
Here is how to use this model to translate legal text from French to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "commission des libertés civiles, de la justice et des affaires intérieures"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_es | 51.16|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SetFit/deberta-v3-large__sst2__train-8-7 | 08ce0515ff0f280e26145579b774c83f2bb50885 | 2022-02-10T09:52:48.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-7 | 7 | null | transformers | 13,960 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-7
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7037
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
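For reference, these hyperparameters correspond roughly to a `TrainingArguments` setup like the one below; this is a reconstruction from the list above, not the authors' actual training script, and the output directory name is arbitrary.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-large__sst2__train-8-7",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",   # Adam betas/epsilon stay at the library defaults listed above
    num_train_epochs=50,
    fp16=True,                    # Native AMP mixed precision
)
```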
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6864 | 1.0 | 3 | 0.7800 | 0.25 |
| 0.6483 | 2.0 | 6 | 0.8067 | 0.25 |
| 0.6028 | 3.0 | 9 | 0.8500 | 0.25 |
| 0.4086 | 4.0 | 12 | 1.0661 | 0.25 |
| 0.2923 | 5.0 | 15 | 1.2302 | 0.25 |
| 0.2059 | 6.0 | 18 | 1.0312 | 0.5 |
| 0.1238 | 7.0 | 21 | 1.1271 | 0.5 |
| 0.0711 | 8.0 | 24 | 1.3100 | 0.5 |
| 0.0453 | 9.0 | 27 | 1.4208 | 0.5 |
| 0.0198 | 10.0 | 30 | 1.5988 | 0.5 |
| 0.0135 | 11.0 | 33 | 1.9174 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-7 | 85fda7fd6f49d4c5d7d5908b633f40ec15d940d8 | 2022-02-10T07:34:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-32-7 | 7 | null | transformers | 13,961 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6736
- Accuracy: 0.5931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7094 | 1.0 | 13 | 0.6887 | 0.5385 |
| 0.651 | 2.0 | 26 | 0.6682 | 0.6923 |
| 0.6084 | 3.0 | 39 | 0.6412 | 0.6923 |
| 0.4547 | 4.0 | 52 | 0.6095 | 0.6923 |
| 0.2903 | 5.0 | 65 | 0.6621 | 0.6923 |
| 0.1407 | 6.0 | 78 | 0.7130 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.9007 | 0.6923 |
| 0.0176 | 8.0 | 104 | 0.9525 | 0.7692 |
| 0.0098 | 9.0 | 117 | 1.0289 | 0.7692 |
| 0.0071 | 10.0 | 130 | 1.0876 | 0.7692 |
| 0.0052 | 11.0 | 143 | 1.1431 | 0.6923 |
| 0.0038 | 12.0 | 156 | 1.1687 | 0.7692 |
| 0.0034 | 13.0 | 169 | 1.1792 | 0.7692 |
| 0.0031 | 14.0 | 182 | 1.2033 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-9 | 61259f8e33bf4f39a03a435f76c538312111d97a | 2022-02-09T20:34:07.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__subj__train-8-9 | 7 | null | transformers | 13,962 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4865
- Accuracy: 0.778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7024 | 1.0 | 3 | 0.6843 | 0.75 |
| 0.67 | 2.0 | 6 | 0.6807 | 0.5 |
| 0.6371 | 3.0 | 9 | 0.6677 | 0.5 |
| 0.585 | 4.0 | 12 | 0.6649 | 0.5 |
| 0.5122 | 5.0 | 15 | 0.6707 | 0.5 |
| 0.4379 | 6.0 | 18 | 0.6660 | 0.5 |
| 0.4035 | 7.0 | 21 | 0.6666 | 0.5 |
| 0.323 | 8.0 | 24 | 0.6672 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.6534 | 0.5 |
| 0.21 | 10.0 | 30 | 0.6456 | 0.5 |
| 0.1735 | 11.0 | 33 | 0.6325 | 0.5 |
| 0.133 | 12.0 | 36 | 0.6214 | 0.5 |
| 0.0986 | 13.0 | 39 | 0.6351 | 0.5 |
| 0.081 | 14.0 | 42 | 0.6495 | 0.5 |
| 0.0638 | 15.0 | 45 | 0.6671 | 0.5 |
| 0.0449 | 16.0 | 48 | 0.7156 | 0.5 |
| 0.0399 | 17.0 | 51 | 0.7608 | 0.5 |
| 0.0314 | 18.0 | 54 | 0.7796 | 0.5 |
| 0.0243 | 19.0 | 57 | 0.7789 | 0.5 |
| 0.0227 | 20.0 | 60 | 0.7684 | 0.5 |
| 0.0221 | 21.0 | 63 | 0.7628 | 0.5 |
| 0.0192 | 22.0 | 66 | 0.7728 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
TehranNLP-org/roberta-base-mnli-2e-5-42 | 8261dc70ae6c5f83a03c4f5a02c019d2faa16c70 | 2021-08-28T16:46:05.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/roberta-base-mnli-2e-5-42 | 7 | null | transformers | 13,963 | Entry not found |
Tejas3/Xlnet_base_80 | a55193d707454aaaa8429f0198a0d26e394e9331 | 2021-07-20T10:58:08.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
]
| text-classification | false | Tejas3 | null | Tejas3/Xlnet_base_80 | 7 | null | transformers | 13,964 | Entry not found |
The-Programmer-With-Cool-Pens/TifaBotAIPackage | 1ebac03edcdecd0b693e1f8272930ced0c42546c | 2021-08-26T21:50:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | The-Programmer-With-Cool-Pens | null | The-Programmer-With-Cool-Pens/TifaBotAIPackage | 7 | null | transformers | 13,965 | ---
tags:
- conversational
---
# Tifa DialoGPT Model |
TransQuest/monotransquest-da-et_en-wiki | 8397cc9f3b167e28586a724d299ec35159effc88 | 2021-06-03T19:05:32.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"et-en",
"transformers",
"Quality Estimation",
"monotransquest",
"DA",
"license:apache-2.0"
]
| text-classification | false | TransQuest | null | TransQuest/monotransquest-da-et_en-wiki | 7 | null | transformers | 13,966 | ---
language: et-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-et_en-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
TurkuNLP/wikibert-base-ko-cased | e001d153c57ca5096e022336ecee59c8b13dbf4b | 2020-11-09T13:08:15.000Z | [
"pytorch",
"transformers"
]
| null | false | TurkuNLP | null | TurkuNLP/wikibert-base-ko-cased | 7 | null | transformers | 13,967 | Entry not found |
VincentC12/sentiment_analysis_kara | 46f49998c7dc7692fee575a239473688d1859f0d | 2022-03-28T11:52:03.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"sentiment-analysis"
]
| text-classification | false | VincentC12 | null | VincentC12/sentiment_analysis_kara | 7 | null | pytorch | 13,968 | ---
language:
- en
library_name: pytorch
metrics:
- negative
- positive
tags:
- sentiment-analysis
widget:
- text: "Thank you for listening to the recommendations of the telephone team for teleworking. we have a strong expertise in this field and accurate listening to Our management!!!!"
example_title: "Exemple positif"
- text: "working conditions and wages are less than average more part of the time it is not a hierarchical system Our opinion counts"
example_title: "Exemple négatif"
---
This model was developed for KARA.
This model is:
- A sentiment analysis tool for HR survey comments
- Trained to be used in ENGLISH (comments must be translated)
- Specialized for comments between 10 and 512 characters
This model is not:
- Suitable for detecting hate speech or a suicide letter
Labels:
- Label_0 = Negative
- Label_1 = Positive
version 1.1.0
Performance on the HRM dataset: 91.5% accuracy
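A minimal inference sketch, assuming the checkpoint loads through the standard `transformers` text-classification pipeline (the example comment is invented and must be in English):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="VincentC12/sentiment_analysis_kara")

comment = "Working conditions and wages are below average, and our opinion does not count."
result = classifier(comment)[0]

# LABEL_0 = negative, LABEL_1 = positive (see the label mapping above).
print(result["label"], round(result["score"], 3))
```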
|
XSY/roberta-scarcasm-discriminator | 99842c76cc913fe0ab3656d32b29ce5b06a04d45 | 2021-11-10T01:02:25.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | XSY | null | XSY/roberta-scarcasm-discriminator | 7 | null | transformers | 13,969 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-scarcasm-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-scarcasm-discriminator
roberta-base
label0: unsarcastic
label1: sarcastic
The fine-tuning method is described in my GitHub repository: https://github.com/yangyangxusheng/Fine-tune-use-transformers
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1844
- Accuracy: 0.9698
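Given the label mapping above, a minimal inference sketch (the example sentence is made up):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("XSY/roberta-scarcasm-discriminator")
model = AutoModelForSequenceClassification.from_pretrained("XSY/roberta-scarcasm-discriminator")

text = "Oh great, another Monday. Exactly what I was hoping for."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Index 0 = unsarcastic, index 1 = sarcastic.
print({"unsarcastic": probs[0].item(), "sarcastic": probs[1].item()})
```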
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.144 | 1.0 | 2179 | 0.2522 | 0.9215 |
| 0.116 | 2.0 | 4358 | 0.2105 | 0.9530 |
| 0.0689 | 3.0 | 6537 | 0.2015 | 0.9610 |
| 0.028 | 4.0 | 8716 | 0.1844 | 0.9698 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
aware-ai/mobilebert-squadv2 | 5df6060de7963435f05455948ceedf5a2659f8b0 | 2020-06-30T21:58:56.000Z | [
"pytorch",
"mobilebert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | aware-ai | null | aware-ai/mobilebert-squadv2 | 7 | null | transformers | 13,970 | Entry not found |
aXhyra/demo_irony_1234567 | f7930300f76065332ef2c2a8034a3bff7f5820f7 | 2021-12-13T17:57:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_irony_1234567 | 7 | null | transformers | 13,971 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_irony_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.685764300192161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_irony_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2905
- F1: 0.6858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7735294032820418e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.5872 | 0.6786 |
| 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 |
| 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 |
| 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/demo_sentiment_1234567 | 0f8c0b27c555bf6093eaa16a418a9cc31af3418c | 2021-12-13T23:06:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_sentiment_1234567 | 7 | null | transformers | 13,972 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_sentiment_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7113620044371958
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_sentiment_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6332
- F1: 0.7114
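A usage sketch with the lower-level API; the example sentence is invented, and the assumption that the three logits follow the tweet_eval negative/neutral/positive order is not confirmed by this card:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
name = "aXhyra/demo_sentiment_1234567"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
inputs = tokenizer("This album is an absolute joy to listen to.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# class probabilities; the label order is assumed, not documented here
print(torch.softmax(logits, dim=-1).squeeze().tolist())
```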
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.62486660723695e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 |
| 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 |
| 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 |
| 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/irony_trained | 192e01d76faab02091f175154ddc9db281474dcd | 2021-12-10T21:49:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/irony_trained | 7 | null | transformers | 13,973 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: irony_trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6851011633121422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6471
- F1: 0.6851
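A sketch of how the reported F1 could be re-checked against tweet_eval; the choice of the test split, the `LABEL_<id>` naming, and the macro averaging are all assumptions, since the card does not state them:
```python
from datasets import load_dataset, load_metric
from transformers import pipeline
dataset = load_dataset("tweet_eval", "irony", split="test")
classifier = pipeline("text-classification", model="aXhyra/irony_trained")
# assumes predictions come back named "LABEL_0"/"LABEL_1"
preds = [int(out["label"].split("_")[-1]) for out in classifier(dataset["text"])]
metric = load_metric("f1")
print(metric.compute(predictions=preds, references=dataset["label"], average="macro"))
```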
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6589 | 1.0 | 716 | 0.6187 | 0.6646 |
| 0.5494 | 2.0 | 1432 | 0.9314 | 0.6793 |
| 0.3369 | 3.0 | 2148 | 1.3468 | 0.6833 |
| 0.2129 | 4.0 | 2864 | 1.6471 | 0.6851 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/irony_trained_31415 | 3eeb048425817ecb2b5bd78840b3863b536cb62b | 2021-12-12T12:17:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/irony_trained_31415 | 7 | null | transformers | 13,974 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: irony_trained_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6690050628690761
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_trained_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6608
- F1: 0.6690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
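The only setting that distinguishes this run from the companion `irony_trained` card is the random seed; as a sketch, a reproduction script would fix it before building the model:
```python
from transformers import set_seed
# seeds the Python, NumPy and PyTorch RNGs; passing seed=31415 through
# TrainingArguments has the same effect when training with Trainer
set_seed(31415)
```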
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6547 | 1.0 | 716 | 0.6173 | 0.6508 |
| 0.57 | 2.0 | 1432 | 0.8629 | 0.6577 |
| 0.2955 | 3.0 | 2148 | 1.4836 | 0.6722 |
| 0.1903 | 4.0 | 2864 | 1.6608 | 0.6690 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_sentiment_1234567 | 2c8389c5ef4393d407337445c4ba1ddc719b35d5 | 2021-12-14T23:23:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_sentiment_1234567 | 7 | null | transformers | 13,975 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_sentiment_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.71829420028644
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_sentiment_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0860
- F1: 0.7183
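The card does not say how tweets were preprocessed before scoring; TweetEval-style models are often fed text with user handles and URLs normalised, so a hypothetical helper (every detail below is an assumption, not documented here) might look like this:
```python
def normalize_tweet(text: str) -> str:
    # replace handles and links with generic placeholders
    tokens = []
    for token in text.split(" "):
        if token.startswith("@") and len(token) > 1:
            token = "@user"
        elif token.startswith("http"):
            token = "http"
        tokens.append(token)
    return " ".join(tokens)
print(normalize_tweet("@bob check this out http://t.co/xyz"))  # "@user check this out http"
```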
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.2792011721188e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3747 | 1.0 | 11404 | 0.6515 | 0.7045 |
| 0.6511 | 2.0 | 22808 | 0.7334 | 0.7188 |
| 0.0362 | 3.0 | 34212 | 0.9498 | 0.7195 |
| 1.0576 | 4.0 | 45616 | 1.0860 | 0.7183 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/sentiment_trained_31415 | a881ee81970f324305b520d22f3b5092bc02862e | 2021-12-11T21:59:51.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/sentiment_trained_31415 | 7 | null | transformers | 13,976 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: sentiment_trained_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7188262432133108
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2481
- F1: 0.7188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
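As a sketch, the F1 column in the results below could have been produced by a `compute_metrics` callback passed to the `Trainer`; the macro averaging is an assumption, since the card does not say how F1 was aggregated:
```python
import numpy as np
from datasets import load_metric
f1_metric = load_metric("f1")
def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied during Trainer evaluation
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return f1_metric.compute(predictions=predictions, references=labels, average="macro")
```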
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.651 | 1.0 | 11404 | 0.6669 | 0.7141 |
| 0.6066 | 2.0 | 22808 | 0.8160 | 0.7198 |
| 0.503 | 3.0 | 34212 | 1.0659 | 0.7182 |
| 0.386 | 4.0 | 45616 | 1.2481 | 0.7188 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/test_hate_trained_test | 724b69e63faddc78fe4248bf36db38cb7556ccb6 | 2021-12-12T18:11:11.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/test_hate_trained_test | 7 | null | transformers | 13,977 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: test_hate_trained_test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7691585677255204
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_hate_trained_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1807
- F1: 0.7692
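The card does not document how the two classes are named, so a quick way to check the exported mapping (it may simply be the generic `LABEL_0`/`LABEL_1`) is to read the config:
```python
from transformers import AutoConfig
config = AutoConfig.from_pretrained("aXhyra/test_hate_trained_test")
# e.g. {0: "LABEL_0", 1: "LABEL_1"} unless id2label was set during export
print(config.id2label)
```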
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.257754679724796e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4362 | 1.0 | 1125 | 0.5282 | 0.7369 |
| 0.3193 | 2.0 | 2250 | 0.6364 | 0.7571 |
| 0.1834 | 3.0 | 3375 | 1.0346 | 0.7625 |
| 0.0776 | 4.0 | 4500 | 1.1807 | 0.7692 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aakashD/t5_paraphrase | 8542ad3d266369b89e3036bfc2fd0e0a9584d892 | 2021-06-23T10:47:55.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | aakashD | null | aakashD/t5_paraphrase | 7 | null | transformers | 13,978 | Entry not found |
ad6398/gupshup_e2e_t5 | 357de122d809d444a509b3ce41d68ce4c6fac461 | 2021-09-07T10:28:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | ad6398 | null | ad6398/gupshup_e2e_t5 | 7 | null | transformers | 13,979 | Entry not found |
adamlin/filter | 70b5cbd131b4a8d04824510e77496dff8cb5d248 | 2021-07-09T11:10:43.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer"
]
| text-classification | false | adamlin | null | adamlin/filter | 7 | null | transformers | 13,980 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model_index:
- name: filter
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filter
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the GLUE STSB dataset.
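STS-B is a sentence-similarity regression task, so a usage sketch would score a sentence pair with a single-logit head; whether this particular checkpoint (a bert-base-chinese model fine-tuned on an English dataset) actually exposes such a head is an assumption, not something this card confirms:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
name = "adamlin/filter"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # roughly on the 0-5 STS-B similarity scale, if the head is a regressor
```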
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.9.0
- Tokenizers 0.10.3
|
adamlin/ml999_grinding_machine | 83cb7611a6f6722c1e26c36104e00d66938e03cd | 2021-12-20T16:49:02.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_grinding_machine | 7 | null | transformers | 13,981 | Entry not found |
addy88/gpt-neo-netflix | 3292a2e1db612c3cba10982714b3246ba5d6fa54 | 2022-01-02T06:33:26.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | addy88 | null | addy88/gpt-neo-netflix | 7 | null | transformers | 13,982 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-bert-hinglish-big | ce653965610776036711e16755681d78d32b83b6 | 2021-11-26T17:45:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-bert-hinglish-big | 7 | null | transformers | 13,983 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-distilbert-base-cased | 874c832d4aeeefee3ba878b6427a1c0fd4e34b8d | 2021-11-25T21:16:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-distilbert-base-cased | 7 | null | transformers | 13,984 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-distilbert-hinglish-big | 0a2192f20c5c20715d816716e45c53c720988f22 | 2021-11-26T18:21:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-distilbert-hinglish-big | 7 | null | transformers | 13,985 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-roberta-hinglish-small | 141b55c5c873d563b4009ec9240c3a3ce8bfe073 | 2021-11-26T17:13:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-roberta-hinglish-small | 7 | null | transformers | 13,986 | Entry not found |
aditeyabaral/finetuned-sail2017-indic-bert | ccf3769c8e6e7b3fcec0134d77bd070a30073b9b | 2021-11-14T15:38:52.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-sail2017-indic-bert | 7 | null | transformers | 13,987 | Entry not found |
agiagoulas/bert-pss | b64b15110bf9f3be03c2ed2074dcb6905c88a755 | 2021-05-18T23:16:17.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | agiagoulas | null | agiagoulas/bert-pss | 7 | null | transformers | 13,988 | bert-base-uncased model trained on the tobacco800 dataset for the task of page-stream-segmentation.
[Link](https://github.com/agiagoulas/page-stream-segmentation) to the GitHub repository with the model implementation. |
airKlizz/bert2bert-multi-en-wiki-news | 391ef9b35449f0b5924630f1f29584d815556dd7 | 2020-08-11T09:05:53.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/bert2bert-multi-en-wiki-news | 7 | null | transformers | 13,989 | Entry not found |
airKlizz/t5-base-with-title-multi-de-wiki-news | da1bdc10e8f796aa8c84af9e7bf00f0f3fa85e78 | 2021-06-23T10:57:20.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/t5-base-with-title-multi-de-wiki-news | 7 | null | transformers | 13,990 | Entry not found |
akahana/tiny-roberta-indonesia | 19b3c82c80c1afb924fdfe80055a467ec953bf6d | 2021-11-25T03:14:55.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"id",
"dataset:wikipedia",
"transformers",
"tiny-roberta-indonesia",
"license:mit"
]
| feature-extraction | false | akahana | null | akahana/tiny-roberta-indonesia | 7 | null | transformers | 13,991 | ---
language: id
tags:
- tiny-roberta-indonesia
license: mit
datasets:
- wikipedia
widget:
- text: "ikiryo adalah <mask> hantu dalam mitologi jepang."
---
# Indonesian tiny-RoBERTa
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "akahana/tiny-roberta-indonesia"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("ikiryo adalah <mask> hantu dalam mitologi jepang.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
# load the base encoder and its tokenizer
pretrained_name = "akahana/tiny-roberta-indonesia"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
# encode a sentence and run it through the model;
# output.last_hidden_state holds one hidden vector per token
prompt = "ikiryo adalah <mask> hantu dalam mitologi jepang."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
``` |
alecmullen/autonlp-group-classification-441411446 | 10930e05afceeefb7dd83c7f8a787a6288faffcd | 2021-12-22T23:03:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:alecmullen/autonlp-data-group-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | alecmullen | null | alecmullen/autonlp-group-classification-441411446 | 7 | null | transformers | 13,992 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alecmullen/autonlp-data-group-classification
co2_eq_emissions: 0.4362732160754736
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 441411446
- CO2 Emissions (in grams): 0.4362732160754736
## Validation Metrics
- Loss: 0.7598486542701721
- Accuracy: 0.8222222222222222
- Macro F1: 0.2912091747693842
- Micro F1: 0.8222222222222222
- Weighted F1: 0.7707160863181806
- Macro Precision: 0.29631463146314635
- Micro Precision: 0.8222222222222222
- Weighted Precision: 0.7341339689524508
- Macro Recall: 0.30174603174603176
- Micro Recall: 0.8222222222222222
- Weighted Recall: 0.8222222222222222
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alecmullen/autonlp-group-classification-441411446
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
alex6095/SanctiMolyTopic | b1a6e28ff4225b99ddaf398d89382def32221572 | 2021-12-12T11:29:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | alex6095 | null | alex6095/SanctiMolyTopic | 7 | null | transformers | 13,993 | Entry not found |
alina1997/MarianMT | 0b2bfdc183d3531eeb00afa6cbd809ba269e4c3b | 2021-11-16T16:11:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"de",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | alina1997 | null | alina1997/MarianMT | 7 | null | transformers | 13,994 | ---
language:
- en
- de
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: model_output_en_de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_en_de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1298
- Bleu: 33.9121
- Gen Len: 76.8132
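A minimal translation sketch for this checkpoint; the example sentence is invented, and the assumption is that the repo ships standard Marian weights and tokenizer files:
```python
from transformers import MarianMTModel, MarianTokenizer
name = "alina1997/MarianMT"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)
batch = tokenizer(["The weather in Berlin is lovely today."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```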
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
alireza7/ARMAN-SH-persian-base-tebyan | 47d6cb42b452cba0097402fa1acec18deb254968 | 2021-09-29T19:19:24.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-tebyan | 7 | null | transformers | 13,995 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
allenai/dsp_roberta_base_dapt_biomed_tapt_rct_180K | 44a2f5efe5dab74a4ff1339f9d46bfe98b4ac42a | 2021-05-20T13:05:35.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_dapt_biomed_tapt_rct_180K | 7 | null | transformers | 13,996 | Entry not found |
allenai/dsp_roberta_base_tapt_imdb_20000 | 34e962984e14e6ba18fab0d9881712a6f080a593 | 2021-05-20T13:29:14.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_imdb_20000 | 7 | null | transformers | 13,997 | Entry not found |
aloxatel/3RH | a81159b781fc92bfc78dda368b39df9c74bb3fbb | 2021-05-20T13:41:07.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | aloxatel | null | aloxatel/3RH | 7 | null | transformers | 13,998 | Entry not found |
aloxatel/9WT | 12654fb4333d105302a9caf84375b81a10b48f07 | 2021-05-18T23:30:14.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aloxatel | null | aloxatel/9WT | 7 | null | transformers | 13,999 | Entry not found |