modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
NaliniK/distilbert-base-uncased-finetuned-cola | f7126fddee9250ae8d1c61a5372df687633689c3 | 2021-12-03T17:21:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | NaliniK | null | NaliniK/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,200 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5494735380761103
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8239
- Matthews Correlation: 0.5495
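A minimal inference sketch, assuming the standard `transformers` text-classification pipeline and an illustrative input sentence (neither is specified in this card):
```python
from transformers import pipeline

# Load the fine-tuned CoLA checkpoint as a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="NaliniK/distilbert-base-uncased-finetuned-cola",
)

# CoLA is a linguistic-acceptability task; this prints the predicted label and score
print(classifier("The book was read by the students."))
```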
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5235 | 1.0 | 535 | 0.5402 | 0.4156 |
| 0.3484 | 2.0 | 1070 | 0.5272 | 0.5233 |
| 0.2381 | 3.0 | 1605 | 0.6665 | 0.5050 |
| 0.1746 | 4.0 | 2140 | 0.7512 | 0.5429 |
| 0.1308 | 5.0 | 2675 | 0.8239 | 0.5495 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
NanniKirby/bapismall | 910d5da2a39e6b4183c8e0708a7bb6246a0841de | 2021-09-26T22:00:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | NanniKirby | null | NanniKirby/bapismall | 5 | null | transformers | 16,201 | ---
tags:
- conversational
---
# Bapibot |
NbAiLab/nb-bert-base-samisk | 087648d16ad0256b8e1c805813784dea5c003b54 | 2022-02-16T15:43:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | NbAiLab | null | NbAiLab/nb-bert-base-samisk | 5 | null | transformers | 16,202 | ---
license: apache-2.0
---
|
NbAiLab/nb-roberta-base-scandinavian | c581921c5163fa42bdaf577b1b5c382557f058d4 | 2021-11-29T12:08:45.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"no",
"transformers",
"norwegian",
"license:cc-by-4.0",
"autotrain_compatible"
]
| fill-mask | false | NbAiLab | null | NbAiLab/nb-roberta-base-scandinavian | 5 | null | transformers | 16,203 | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- roberta
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du <mask> en bok.
- text: Dette er et <mask> eksempel.
- text: Av og til kan en språkmodell gi et <mask> resultat.
- text: Som ansat får du <mask> for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling.
---
# This is just a Test Model. Do NOT use for anything!
Continued pretraining from nb-roberta-base.
The domain-specific pretraining is done on the 102GB [Scandinavian corpus](https://huggingface.co/datasets/NbAiLab/scandinavian).
## Train for 180k steps with sequence length 128:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="128" \
--weight_decay="0.01" \
--per_device_train_batch_size="128" \
--per_device_eval_batch_size="128" \
--learning_rate="6e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="180000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="10000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
## Train for 20k steps with sequence length 512:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="48" \
--per_device_eval_batch_size="48" \
--learning_rate="3e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="20000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="20000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
Approximate additional training time: 1 week.
|
NimaBoscarino/aot-gan-celebahq | 8394b1abc3ac29c92f321cc884c8a98240ca4787 | 2022-01-25T08:38:46.000Z | [
"pytorch",
"dataset:celeba-hq",
"transformers",
"face-recognition",
"face-generation",
"face-segmentation",
"generative-adversarial-network"
]
| null | false | NimaBoscarino | null | NimaBoscarino/aot-gan-celebahq | 5 | null | transformers | 16,204 | ---
tags:
- face-recognition
- face-generation
- face-segmentation
- generative-adversarial-network
metrics:
- L1
- PSNR
- SSIM
- FID
datasets:
- celeba-hq
---
# AOT-GAN CelebA-HQ
AOT-GAN is a model that can be used for image in-painting. The CelebA-HQ checkpoint is trained on synthetic human faces, which should make it suitable for touching up and restoring portraits.
This model was generated using [AOT-GAN-for-Inpainting](https://github.com/researchmm/AOT-GAN-for-Inpainting), cited as
```
@inproceedings{yan2021agg,
author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting},
booktitle = {Arxiv},
pages={-},
year = {2020}
}
```
## Dataset
The CelebA-HQ dataset was created with this codebase: https://github.com/tkarras/progressive_growing_of_gans, which is owned by NVIDIA and licensed under Creative Commons Attribution-NonCommercial 4.0 International. |
Norod78/english-sienfeld-distilgpt2 | d33653417800d63a4e936bf8fcbfa7069c96fd66 | 2021-05-21T10:58:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Norod78 | null | Norod78/english-sienfeld-distilgpt2 | 5 | null | transformers | 16,205 | Entry not found |
Norod78/hebrew-project_ben_yehuda-gpt_neo-small | fecba22315ec10cbebc2dd13649b7ed13f31eadc | 2022-07-04T07:28:02.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"he",
"transformers",
"license:mit"
]
| text-generation | false | Norod78 | null | Norod78/hebrew-project_ben_yehuda-gpt_neo-small | 5 | null | transformers | 16,206 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "יום אחד, "
- text: "זה עתה התעורר"
- text: "וזה הצחוק, אמרו חכ"
- text: "אשה צעי"
license: mit
---
# hebrew-project_ben_yehuda-gpt_neo-small
Hebrew story text generation model, in the style of the texts available in [Project Ben Yehuda](https://benyehuda.org/), fine-tuned from [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).
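A minimal generation sketch, assuming the standard `transformers` text-generation pipeline; the prompt is one of the widget examples above:
```python
from transformers import pipeline

# Generate a short continuation from one of the widget prompts ("One day, ")
generator = pipeline("text-generation", model="Norod78/hebrew-project_ben_yehuda-gpt_neo-small")
print(generator("יום אחד, ", max_length=50, do_sample=True))
```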
## Dataset
Stripped text dump from [project ben-yehuda public_domain_dump 2021-02](https://github.com/projectbenyehuda/public_domain_dump/releases/tag/2021-02)
|
Nuwaisir/Quran_speech_recognizer | 0c21ef1cde1202b91e1fdd8d4c25aaef3065a0be | 2022-02-21T12:39:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Nuwaisir | null | Nuwaisir/Quran_speech_recognizer | 5 | null | transformers | 16,207 | Entry not found |
Ogayo/Hel-ach-en | fadb45e9ce9cd667866b878245be08153f762362 | 2020-12-11T21:30:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ach",
"en",
"dataset:JW300",
"transformers",
"translation",
"license:cc-by-4.0",
"autotrain_compatible"
]
| translation | false | Ogayo | null | Ogayo/Hel-ach-en | 5 | null | transformers | 16,208 | ---
language:
- ach
- en
tags:
- translation
license: cc-by-4.0
datasets:
- JW300
metrics:
- bleu
---
# HEL-ACH-EN
## Model description
MT model translating Acholi to English initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en) on HuggingFace.
## Intended uses & limitations
Machine Translation experiments. Do not use for sensitive tasks.
#### How to use
```python
# Load the Acholi-to-English translation model and tokenizer from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Ogayo/Hel-ach-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Ogayo/Hel-ach-en")
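# Hypothetical usage sketch (not shown in the original snippet): translate one sentence.
# The input string is a placeholder for real Acholi text.
batch = tokenizer(["<Acholi text here>"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))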
```
#### Limitations and bias
Trained on Jehovah's Witnesses data, so it reflects their religious and Christian views.
## Training data
Trained on OPUS JW300 data.
Initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en?text=Bed+gi+nyasi+mar+chieng%27+nyuol+mopong%27+gi+mor%21#model_card)
## Training procedure
Removed duplicates and rows with no alphabetic characters. Trained on GPU.
## Eval results
testset | BLEU
--- | ---
JW300.luo.en | 46.1
|
Omar2027/Author_identification | c305f6f7abfc209d5473e9ba6278ba34ab926a4b | 2021-12-26T20:58:45.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Omar2027 | null | Omar2027/Author_identification | 5 | null | transformers | 16,209 | |
PedroR/xlm-roberta-4-pretrained-with-tokenizer | 3431260c2a571657d7729f53add3db743bdb451c | 2021-07-29T17:06:32.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | PedroR | null | PedroR/xlm-roberta-4-pretrained-with-tokenizer | 5 | null | transformers | 16,210 | Entry not found |
PedroR/xlm-roberta-6 | 5c61eb6405d1e958537a37fbcafb2033f646db6a | 2021-07-27T22:03:13.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | PedroR | null | PedroR/xlm-roberta-6 | 5 | null | transformers | 16,211 | Entry not found |
Peter/in_g_2 | 00fd7c3dbe8b0019864eeeca0cb60ed96787a859 | 2021-07-22T18:58:40.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | Peter | null | Peter/in_g_2 | 5 | null | transformers | 16,212 | Entry not found |
Plim/xls-r-300m-fr | 8be1703f53d4715450cae8905af598076985ffaf | 2022-03-24T11:57:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Plim | null | Plim/xls-r-300m-fr | 5 | null | transformers | 16,213 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - French
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fr
metrics:
- name: Test WER
type: wer
value: 24.56
- name: Test CER
type: cer
value: 7.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 63.62
- name: Test CER
type: cer
value: 17.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 66.45
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
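A minimal transcription sketch, assuming the standard `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder for a French recording sampled at 16 kHz:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline
asr = pipeline("automatic-speech-recognition", model="Plim/xls-r-300m-fr")

# "example_fr.wav" is a placeholder path to a 16 kHz French audio file
print(asr("example_fr.wav"))
```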
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.495 | 0.16 | 500 | 3.3883 | 1.0 |
| 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 |
| 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 |
| 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 |
| 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 |
| 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 |
| 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 |
| 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 |
| 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 |
| 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 |
| 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 |
| 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 |
It achieves its best result at step 6000 on the validation set:
- Loss: 0.2619
- Wer: 0.2457
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7` with split `test`
```bash
python eval.py --model_id Plim/xls-r-300m-fr --dataset mozilla-foundation/common_voice_7_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Plim/xls-r-300m-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
|
Pollawat/mt5-small-thai-qg | 185de717a3062ae7847ce85802f89018d93e0147 | 2021-06-23T14:57:30.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"thai",
"th",
"dataset:NSC2018",
"transformers",
"question-generation",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | Pollawat | null | Pollawat/mt5-small-thai-qg | 5 | null | transformers | 16,214 | ---
tags:
- question-generation
language:
- thai
- th
datasets:
- NSC2018
license: mit
---
[Google's mT5](https://github.com/google-research/multilingual-t5)
This is a model for generating questions from Thai texts. It was fine-tuned on the NSC2018 corpus.
```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qg")
model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qg")
text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน ทำให้กรุงเทพมหานครเป็นเอกนคร (Primate City) จัด มีผู้กล่าวว่า กรุงเทพมหานครเป็น 'เอกนครที่สุดในโลก' เพราะมีประชากรมากกว่านครที่มีประชากรมากเป็นอันดับ 2 ถึง 40 เท่า[3]"
input_ids = tokenizer.encode(text, return_tensors='pt')
beam_output = model.generate(
input_ids,
max_length=50,
num_beams=5,
early_stopping=True
)
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
>> <extra_id_0>ของกรุงเทพมหานครเป็นเมืองหลวงของประเทศใด
``` |
Poly-Pixel/shrek-medium | f95caac8f776b6f231bebb602e942ae5d59a418f | 2021-08-30T21:16:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Poly-Pixel | null | Poly-Pixel/shrek-medium | 5 | null | transformers | 16,215 | ---
tags:
- conversational
---
Shrek |
Pratibha/xlm-roberta-base-finetuned-marc-en | 740773d8020781d00b06d321a01ef2730e660f00 | 2021-10-22T15:22:30.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Pratibha | null | Pratibha/xlm-roberta-base-finetuned-marc-en | 5 | null | transformers | 16,216 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9575
- Mae: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
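As a rough illustration (an assumption, not taken from this card), the settings listed above map onto `transformers.TrainingArguments` as follows; the Adam betas and epsilon are the library defaults:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-marc-en",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```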
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1253 | 1.0 | 235 | 0.9960 | 0.5366 |
| 0.9708 | 2.0 | 470 | 0.9575 | 0.5488 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Prompsit/paraphrase-roberta-es | fc56355d720d89e0f6dbdc639f58306513bda56e | 2021-12-23T12:07:06.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers"
]
| text-classification | false | Prompsit | null | Prompsit/paraphrase-roberta-es | 5 | 2 | transformers | 16,217 | ---
pipeline_tag: text-classification
inference: false
language: es
tags:
- transformers
---
# Prompsit/paraphrase-roberta-es
This model evaluates whether one phrase is a paraphrase of another.
We have fine-tuned this model from pretrained "PlanTL-GOB-ES/roberta-base-bne".
Model built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain.
# How to use it
The model answers the following question: is "phrase B" a paraphrase of "phrase A"?
Note that we consider phrases rather than full sentences, so the model does not expect punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "se buscarán acuerdos" and a candidate paraphrase like "se deberá obtener el acuerdo", you can use the model like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-roberta-es")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-roberta-es")
input = tokenizer('se buscarán acuerdos','se deberá obtener el acuerdo',return_tensors='pt')
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.2266, 0.7734]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.77 and the probability of 0 (=It is not a paraphrase) is 0.22, we can conclude, for our previous example, that "se deberá obtener el acuerdo" is a paraphrase of "se buscarán acuerdos".
# Evaluation results
We used a test dataset of 16,500 human-tagged phrase pairs.
Metrics obtained are:
```
metrics={
'test_loss': 0.4869941473007202,
'test_accuracy': 0.8003636363636364,
'test_precision': 0.6692456479690522,
'test_recall': 0.5896889646357052,
'test_f1': 0.6269535673839184,
'test_matthews_correlation': 0.49324489316659575,
'test_runtime': 27.1537,
'test_samples_per_second': 607.652,
'test_steps_per_second': 19.003
}
``` |
PubChimps/dl-bert | b38ffe5d428543493d3636b57d67a6c929988ed1 | 2021-05-20T12:18:03.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | PubChimps | null | PubChimps/dl-bert | 5 | null | transformers | 16,218 | Entry not found |
Pyke/1 | 668d19426d63b4648cac3c7995239814e2f991e0 | 2021-08-22T13:16:01.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Pyke | null | Pyke/1 | 5 | null | transformers | 16,219 | Entry not found |
Pyke/DS-config-22 | 2fae762ce010951fda69cc4b9ab2597380e9795a | 2021-08-23T16:30:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Pyke | null | Pyke/DS-config-22 | 5 | null | transformers | 16,220 | Entry not found |
RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B | 5ae7d70afb956d960da5ad7597b175b1c88f60bf | 2022-03-24T11:53:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"mozilla-foundation/common_voice_7_0",
"audio",
"speech",
"robust-speech-event",
"hf-asr-leaderboard",
"model-index"
]
| automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B | 5 | null | transformers | 16,221 | ---
language: fi
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
tags:
- generated_from_trainer
- mozilla-foundation/common_voice_7_0
- audio
- automatic-speech-recognition
- speech
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: XLS-R 1B Wav2Vec2 Finnish by Rasmus Toivanen
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 10.96
- name: Test CER
type: cer
value: 2.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-fi-train-aug-lm-1B
This model was trained on the mozilla-foundation/common_voice_7_0 Finnish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1499
- Wer: 0.1955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6473 | 0.29 | 400 | 0.2857 | 0.3825 |
| 0.6039 | 0.58 | 800 | 0.2459 | 0.3476 |
| 0.4757 | 0.87 | 1200 | 0.2338 | 0.3274 |
| 0.4473 | 1.15 | 1600 | 0.2246 | 0.3128 |
| 0.4322 | 1.44 | 2000 | 0.1962 | 0.2805 |
| 0.3961 | 1.73 | 2400 | 0.2070 | 0.2797 |
| 0.3642 | 2.02 | 2800 | 0.1790 | 0.2473 |
| 0.3561 | 2.31 | 3200 | 0.1769 | 0.2375 |
| 0.282 | 2.6 | 3600 | 0.1672 | 0.2263 |
| 0.2978 | 2.89 | 4000 | 0.1636 | 0.2192 |
| 0.2722 | 3.17 | 4400 | 0.1637 | 0.2102 |
| 0.2924 | 3.46 | 4800 | 0.1506 | 0.2021 |
| 0.2631 | 3.75 | 5200 | 0.1499 | 0.1955 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
REZERO/DialoGPT-medium-saitama | ecc66974f805ff1972819dfb72e56e8e00ac57c5 | 2021-09-13T17:55:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | REZERO | null | REZERO/DialoGPT-medium-saitama | 5 | null | transformers | 16,222 | ---
tags:
- conversational
---
# Saitama DialoGPT Model |
RameshArvind/roberta_long_answer_nq | 35bb2707b720f4dd6e9b1abb74f3ac784b5d3fe8 | 2021-05-20T12:20:29.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | RameshArvind | null | RameshArvind/roberta_long_answer_nq | 5 | null | transformers | 16,223 | Entry not found |
Raychanan/chinese-roberta-wwm-ext-FineTuned-Binary | 4f36ebddd198b633f73fe6bb49b8b828779f3e77 | 2021-05-18T21:56:59.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Raychanan | null | Raychanan/chinese-roberta-wwm-ext-FineTuned-Binary | 5 | null | transformers | 16,224 | DO NOT USE THIS |
Riad/finetuned-bert-mrpc | cd9ee0acb9406eac0439fb375f6b507ad3e9ed69 | 2021-09-15T12:45:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Riad | null | Riad/finetuned-bert-mrpc | 5 | null | transformers | 16,225 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8676470588235294
- name: F1
type: f1
value: 0.9084745762711864
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4382
- Accuracy: 0.8676
- F1: 0.9085
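A minimal sentence-pair inference sketch, assuming the standard `transformers` API; MRPC is a paraphrase-detection task over sentence pairs, and the example sentences are illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned MRPC checkpoint
tokenizer = AutoTokenizer.from_pretrained("Riad/finetuned-bert-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Riad/finetuned-bert-mrpc")

# Encode a pair of (illustrative) sentences and score them
inputs = tokenizer("The company posted strong earnings.",
                   "The firm reported solid profits.",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # in the GLUE MRPC label order, index 1 corresponds to "equivalent"
```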
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5454 | 1.0 | 230 | 0.4396 | 0.8309 | 0.8871 |
| 0.3387 | 2.0 | 460 | 0.3783 | 0.8529 | 0.8976 |
| 0.1956 | 3.0 | 690 | 0.4382 | 0.8676 | 0.9085 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Ritvik/nlp_model_mini | 0a4db4ebf484175a19f5d530a9f24fe5a71c1a2d | 2021-10-21T21:10:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Ritvik | null | Ritvik/nlp_model_mini | 5 | null | transformers | 16,226 | Entry not found |
RonnieTheCat/QG-System | fe4792c03a4265cfd81c58f155aa3b94e52af843 | 2021-08-31T15:58:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | RonnieTheCat | null | RonnieTheCat/QG-System | 5 | null | transformers | 16,227 | Entry not found |
Ruizhou/bert-base-uncased-finetuned-mrpc | 9b817487ca23af56207e0cdca7f1a86f50b87756 | 2021-10-03T07:50:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Ruizhou | null | Ruizhou/bert-base-uncased-finetuned-mrpc | 5 | null | transformers | 16,228 | Entry not found |
RuudVelo/wav2vec2-large-xls-r-300m-nl | e81607e997f53018ff6c5fe14356437ceafe1f32 | 2022-03-23T18:29:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | RuudVelo | null | RuudVelo/wav2vec2-large-xls-r-300m-nl | 5 | 1 | transformers | 16,229 | ---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- nl
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-nl
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
args: nl
metrics:
- name: Test WER
type: wer
value: 17.17
- name: Test CER
type: cer
value: 5.13
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 35.76
- name: Test CER
type: cer
value: 13.99
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 37.19
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-nl
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the test set:
- Loss: 0.3923
- Wer: 0.1748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.5787 | 0.89 | 400 | 0.6354 | 0.5643 |
| 0.3036 | 1.78 | 800 | 0.3690 | 0.3552 |
| 0.188 | 2.67 | 1200 | 0.3239 | 0.2958 |
| 0.1434 | 3.56 | 1600 | 0.3093 | 0.2515 |
| 0.1245 | 4.44 | 2000 | 0.3024 | 0.2433 |
| 0.1095 | 5.33 | 2400 | 0.3249 | 0.2643 |
| 0.0979 | 6.22 | 2800 | 0.3191 | 0.2281 |
| 0.0915 | 7.11 | 3200 | 0.3152 | 0.2216 |
| 0.0829 | 8.0 | 3600 | 0.3419 | 0.2218 |
| 0.0777 | 8.89 | 4000 | 0.3432 | 0.2132 |
| 0.073 | 9.78 | 4400 | 0.3223 | 0.2131 |
| 0.0688 | 10.67 | 4800 | 0.3094 | 0.2152 |
| 0.0647 | 11.56 | 5200 | 0.3411 | 0.2152 |
| 0.0639 | 12.44 | 5600 | 0.3762 | 0.2135 |
| 0.0599 | 13.33 | 6000 | 0.3790 | 0.2137 |
| 0.0572 | 14.22 | 6400 | 0.3693 | 0.2118 |
| 0.0563 | 15.11 | 6800 | 0.3495 | 0.2139 |
| 0.0521 | 16.0 | 7200 | 0.3800 | 0.2023 |
| 0.0508 | 16.89 | 7600 | 0.3678 | 0.2033 |
| 0.0513 | 17.78 | 8000 | 0.3845 | 0.1987 |
| 0.0476 | 18.67 | 8400 | 0.3511 | 0.2037 |
| 0.045 | 19.56 | 8800 | 0.3794 | 0.1994 |
| 0.044 | 20.44 | 9200 | 0.3525 | 0.2050 |
| 0.043 | 21.33 | 9600 | 0.4082 | 0.2007 |
| 0.0409 | 22.22 | 10000 | 0.3866 | 0.2004 |
| 0.0393 | 23.11 | 10400 | 0.3899 | 0.2008 |
| 0.0382 | 24.0 | 10800 | 0.3626 | 0.1951 |
| 0.039 | 24.89 | 11200 | 0.3936 | 0.1953 |
| 0.0361 | 25.78 | 11600 | 0.4262 | 0.1928 |
| 0.0362 | 26.67 | 12000 | 0.3796 | 0.1934 |
| 0.033 | 27.56 | 12400 | 0.3616 | 0.1934 |
| 0.0321 | 28.44 | 12800 | 0.3742 | 0.1933 |
| 0.0325 | 29.33 | 13200 | 0.3582 | 0.1869 |
| 0.0309 | 30.22 | 13600 | 0.3717 | 0.1874 |
| 0.029 | 31.11 | 14000 | 0.3814 | 0.1894 |
| 0.0296 | 32.0 | 14400 | 0.3698 | 0.1877 |
| 0.0281 | 32.89 | 14800 | 0.3976 | 0.1899 |
| 0.0275 | 33.78 | 15200 | 0.3854 | 0.1858 |
| 0.0264 | 34.67 | 15600 | 0.4021 | 0.1889 |
| 0.0261 | 35.56 | 16000 | 0.3850 | 0.1830 |
| 0.0242 | 36.44 | 16400 | 0.4091 | 0.1878 |
| 0.0245 | 37.33 | 16800 | 0.4012 | 0.1846 |
| 0.0243 | 38.22 | 17200 | 0.3996 | 0.1833 |
| 0.0223 | 39.11 | 17600 | 0.3962 | 0.1815 |
| 0.0223 | 40.0 | 18000 | 0.3898 | 0.1832 |
| 0.0219 | 40.89 | 18400 | 0.4019 | 0.1822 |
| 0.0211 | 41.78 | 18800 | 0.4035 | 0.1809 |
| 0.021 | 42.67 | 19200 | 0.3915 | 0.1826 |
| 0.0208 | 43.56 | 19600 | 0.3934 | 0.1784 |
| 0.0188 | 44.44 | 20000 | 0.3912 | 0.1787 |
| 0.0195 | 45.33 | 20400 | 0.3989 | 0.1766 |
| 0.0186 | 46.22 | 20800 | 0.3887 | 0.1773 |
| 0.0188 | 47.11 | 21200 | 0.3982 | 0.1758 |
| 0.0175 | 48.0 | 21600 | 0.3933 | 0.1755 |
| 0.0172 | 48.89 | 22000 | 0.3921 | 0.1749 |
| 0.0187 | 49.78 | 22400 | 0.3923 | 0.1748 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
SEBIS/code_trans_t5_base_code_comment_generation_java_multitask_finetune | be12bbcaac6d335ce826181a0913ab4ca4af2057 | 2021-06-23T04:08:25.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_comment_generation_java_multitask_finetune | 5 | null | transformers | 16,230 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/code%20comment%20generation/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 60,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask | d6a07bcb4ea7fcb52482821774b9fb835da3860c | 2021-06-23T04:36:53.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask | 5 | null | transformers | 16,231 | ---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/php/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune | 7a01c81ba90047da6cefbb705325cc63221453bb | 2021-06-23T04:55:29.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune | 5 | null | transformers | 16,232 | ---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method.
## Intended uses & limitations
The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/ruby/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_program_synthese_multitask | b5baf27b3497355bd3f24c9fd08ba4c9f051e520 | 2021-06-23T05:07:00.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_program_synthese_multitask | 5 | null | transformers | 16,233 | ---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
Pretrained model on a Lisp-inspired programming DSL using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model can be used to generate Lisp-inspired DSL code from natural-language task descriptions.
### How to use
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_program_synthese_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_program_synthese_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/program%20synthesis/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the program synthesis task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune | 8199a3a938e68ec5c2a5275fe67343c0e26c412d | 2021-06-23T05:32:32.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune | 5 | null | transformers | 16,234 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_transfer_learning_pretrain | d8def2832ca81ee62e0168f14ca81bf6e31220e8 | 2021-06-23T05:35:59.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers"
]
| feature-extraction | false | SEBIS | null | SEBIS/code_trans_t5_base_transfer_learning_pretrain | 5 | null | transformers | 16,235 | # CodeTrans transfer learning pre-trained model
Pretrained model on programming languages using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain.
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
It can be fine-tuned for other tasks in the software development domain.
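A minimal loading sketch for further fine-tuning, assuming the same `AutoModelWithLMHead`/`AutoTokenizer` usage shown in the other CodeTrans cards:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# Load the transfer-learning pre-trained checkpoint as a starting point for fine-tuning
tokenizer = AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_transfer_learning_pretrain")
model = AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_transfer_learning_pretrain")
```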
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune | 010079372166f91ee5fc7aae8865cc31fc81b8c1 | 2021-06-23T07:27:48.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune | 5 | null | transformers | 16,236 | ---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 18,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune | e9ee9476df92c0b68d707d8039f69ac21c6057c0 | 2021-06-23T09:43:47.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune | 5 | null | transformers | 16,237 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune | 3c68c4338e86e6d91ca4450d045ed7499a0d9f6b | 2021-06-23T09:49:31.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune | 5 | null | transformers | 16,238 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask | 1ab70cdf3efc16e54a491633342ae90121d885d4 | 2021-06-23T10:04:56.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask | 5 | null | transformers | 16,239 | ---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---
# CodeTrans model for code documentation generation javascript
Pretrained model on programming language javascript using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/javascript/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune | 98110513f6daacfa6a70a8daced4d11780e66086 | 2021-06-23T10:11:17.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune | 5 | null | transformers | 16,240 | ---
tags:
- summarization
widget:
- text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
---
# CodeTrans model for code documentation generation python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the python function/method.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
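A minimal sketch (not part of the original card) of one way to produce space-separated python tokens of the kind shown in the widget example; whitespace handling is simplified and may differ from the authors' preprocessing.
```python
# Sketch only: space-separating python tokens with the standard tokenize module.
import io
import tokenize

def space_tokenize(code: str) -> str:
    tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(code + "\n").readline)
              if t.string.strip()]
    return " ".join(tokens)

print(space_tokenize("def e(message, exit_code=None): print_log(message, YELLOW, BOLD)"))
# -> "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD )"
```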
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/python/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_ruby | 82e2839c8ef193ebbe553fca1a95fd4243b5ba29 | 2021-06-23T10:11:41.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_ruby | 5 | null | transformers | 16,241 | ---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the CodeSearchNet Corpus ruby dataset.
## Intended uses & limitations
The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/ruby/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
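For quick experiments, a comparable corpus can also be loaded from the Hugging Face Hub; the snippet below is only a sketch, and the dataset id, configuration and field names are assumptions rather than part of the original card.
```python
# Sketch only: loading the CodeSearchNet Ruby split from the Hugging Face Hub.
from datasets import load_dataset

ruby = load_dataset("code_search_net", "ruby", split="train")
example = ruby[0]
print(example["func_code_string"][:200])      # raw function body
print(example["func_documentation_string"])   # reference docstring
```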
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune | 6b2c0be17085fde897908696eb33a8c5c27b5c6a | 2021-06-23T10:13:18.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune | 5 | null | transformers | 16,242 | ---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method.
## Intended uses & limitations
The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/ruby/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune | 31934e052595d478dc70400734dd940f9a118ecb | 2021-06-23T10:20:50.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune | 5 | null | transformers | 16,243 | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/csharp/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune | 2510289910fafe984f891e1ec5dba74725ced382 | 2021-06-23T10:21:27.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune | 5 | null | transformers | 16,244 | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/csharp/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_cls_en | edb51f7610de9456267b70afca9ec31e91cb2b4f | 2021-06-23T10:28:36.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English",
"dataset:jrc-acquis",
"transformers",
"classification English model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_en | 5 | null | transformers | 16,245 |
---
language: English
tags:
- classification English model
datasets:
- jrc-acquis
widget:
- text: "Appointment of members of the Conciliation Body instituted by Commission Decision 94/442/EC of 1 July 1994 setting up a conciliation procedure in the context of the clearance of the accounts of the European Agricultural Guidance and Guarantee Fund (EAGGF) Guarantee Section (2006/C 193/09) (1) The Commission has renewed the term of office of: Mr José Luis SAENZ GARCIA-BAQUERO (ES) (from 1 August 2006 to 31 July 2007). (2) The Commission has appointed as members: - Mr Peter BAUMANN (DA) (from 1 August 2006 to 31 July 2009); - Mr Daniel PERRIN (FR) (from 1 August 2006 to 31 July 2009). (3) The Commission has appointed as substitute members: - Mr Robert BURIAN (A) (from 1 August 2006); - Mr Eduardo DIEZ PATIER (ES) (from 1 August 2006). --------------------------------------------------"
---
# legal_t5_small_cls_en model
Model for classification of legal text written in English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on the jrc-acquis corpus.
## Model description
legal_t5_small_cls_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
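For orientation, the scaled-down dimensions above correspond roughly to the Transformers configuration sketched below; the vocabulary size is an assumption, since the released checkpoint ships its own SentencePiece vocabulary.
```python
# Sketch only: a t5-small-scale configuration matching the dimensions described above.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    vocab_size=32128,      # assumed; the real vocabulary comes from the custom unigram model
    d_model=512,           # dmodel = 512
    d_ff=2048,             # dff = 2,048
    num_heads=8,           # 8-headed attention
    num_layers=6,          # 6 encoder layers
    num_decoder_layers=6,  # 6 decoder layers
)
model = T5ForConditionalGeneration(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # roughly 60M, as stated above
```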
## Intended uses & limitations
The model could be used for classification of legal texts written in English.
### How to use
Here is how to use this model to classify legal text written in English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Appointment of members of the Conciliation Body instituted by Commission Decision 94/442/EC of 1 July 1994 setting up a conciliation procedure in the context of the clearance of the accounts of the European Agricultural Guidance and Guarantee Fund (EAGGF) Guarantee Section (2006/C 193/09) (1) The Commission has renewed the term of office of: Mr José Luis SAENZ GARCIA-BAQUERO (ES) (from 1 August 2006 to 31 July 2007). (2) The Commission has appointed as members: - Mr Peter BAUMANN (DA) (from 1 August 2006 to 31 July 2009); - Mr Daniel PERRIN (FR) (from 1 August 2006 to 31 July 2009). (3) The Commission has appointed as substitute members: - Mr Robert BURIAN (A) (from 1 August 2006); - Mr Eduardo DIEZ PATIER (ES) (from 1 August 2006). --------------------------------------------------"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_cls_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 19 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
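A sketch of how such a vocabulary could be built with the `sentencepiece` library; the input file name and vocabulary size are placeholders, not the original settings.
```python
# Sketch only: training a unigram SentencePiece model over the combined parallel corpus.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",        # one sentence per line, all language pairs combined (placeholder)
    model_prefix="legal_t5",
    vocab_size=32000,          # assumed
    model_type="unigram",
    character_coverage=0.9995,
)
sp = spm.SentencePieceProcessor(model_file="legal_t5.model")
print(sp.encode("Appointment of members of the Conciliation Body", out_type=str))
```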
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results (a small scoring sketch follows the table):
Test results :
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_en | 0.6247|
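A scoring sketch, not from the original card: once predictions for the test set are collected, an F1 score of this kind can be computed with scikit-learn. The labels and the averaging mode below are assumptions.
```python
# Sketch only: computing an F1 score over predicted legal-text categories.
from sklearn.metrics import f1_score

y_true = ["agriculture", "finance", "agriculture", "environment"]      # placeholder references
y_pred = ["agriculture", "agriculture", "agriculture", "environment"]  # placeholder predictions
print(f1_score(y_true, y_pred, average="weighted"))
```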
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_sv | af5d1a19c8df74802d22c39fbf6897b02a2916e1 | 2021-06-23T11:06:05.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Swedish model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_es_sv | 5 | null | transformers | 16,246 |
---
language: Spanish Swedish
tags:
- translation Spanish Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Tiempo de uso de la palabra ( artículo 149 del Reglamento PE)"
---
# legal_t5_small_multitask_es_sv model
Model for translating legal text from Spanish to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model predicted masked spans of the input (masked language modelling).
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_es_sv model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario (a sketch of such a task mixture is shown below).
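A minimal sketch, not part of the original card, of how supervised translation examples and unsupervised masked-language-modelling examples could be mixed into one training stream; the toy records and sampling probabilities are placeholders.
```python
# Sketch only: mixing translation and masked-LM examples for multitask training.
from datasets import Dataset, interleave_datasets

translation = Dataset.from_dict({
    "source": ["translate Spanish to Swedish: Tiempo de uso de la palabra"],
    "target": ["<Swedish reference translation>"],   # placeholder
})
masked_lm = Dataset.from_dict({
    "source": ["Tiempo de <extra_id_0> la palabra"],
    "target": ["<extra_id_0> uso de <extra_id_1>"],
})

# Sampling probabilities are illustrative assumptions, not the original mixing rates.
mixed = interleave_datasets([translation, masked_lm], probabilities=[0.8, 0.2], seed=42)
for example in mixed:
    print(example["source"], "->", example["target"])
```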
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Swedish.
### How to use
Here is how to use this model to translate legal text from Spanish to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Tiempo de uso de la palabra ( artículo 149 del Reglamento PE)"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_sv | 37.975|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_es | f4bee452ffb5108a11c07ae7fcbf2a506d93421c | 2021-06-23T11:32:25.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Spanish model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_es | 5 | null | transformers | 16,247 |
---
language: Cszech Spanish
tags:
- translation Cszech Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "k návrhu směrnice Evropského parlamentu a Rady o bezpečnosti hraček"
---
# legal_t5_small_trans_cs_es model
Model for translating legal text from Czech to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to Spanish.
### How to use
Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "k návrhu směrnice Evropského parlamentu a Rady o bezpečnosti hraček"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_es | 50.77|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_es | 59317022df13612cfe3eb8922cfbdf629d74477d | 2021-06-23T09:29:03.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Spanish model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_es | 5 | null | transformers | 16,248 |
---
language: Deustch Spanish
tags:
- translation Deustch Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "7. betont, dass die Kommission und die Mitgliedstaaten die Rolle der Frauen in der Sozialwirtschaft aufgrund der hohen Frauenerwerbstätigkeit in dem Sektor und der Bedeutung der Dienstleistungen, die er für die Förderung der Vereinbarkeit von Beruf und Privatleben bietet, aufwerten, unterstützen und verstärken müssen;"
---
# legal_t5_small_trans_de_es model
Model for translating legal text from German to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Spanish.
### How to use
Here is how to use this model to translate legal text from German to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "7. betont, dass die Kommission und die Mitgliedstaaten die Rolle der Frauen in der Sozialwirtschaft aufgrund der hohen Frauenerwerbstätigkeit in dem Sektor und der Bedeutung der Dienstleistungen, die er für die Förderung der Vereinbarkeit von Beruf und Privatleben bietet, aufwerten, unterstützen und verstärken müssen;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_es | 47.24|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_de_small_finetuned | 62bd32401ccdb4c3b33ddbb4b6db4d3775b1350a | 2021-06-23T09:35:50.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Deustch model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_de_small_finetuned | 5 | null | transformers | 16,249 |
---
language: English Deustch
tags:
- translation English Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "The reference framework for the free movement of workers is laid down in Council Regulation (EEC) No 1612/68 on freedom of movement for workers within the Community and has been revised several times."
---
# legal_t5_small_trans_en_de_small_finetuned model
Model for translating legal text from English to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task. It was then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_en_de_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to German.
### How to use
Here is how to use this model to translate legal text from English to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "The reference framework for the free movement of workers is laid down in Council Regulation (EEC) No 1612/68 on freedom of movement for workers within the Community and has been revised several times."
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_de_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
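For illustration, the vocabulary step described above could be reproduced with the standard SentencePiece trainer. This is a sketch only: the input path, model prefix and vocabulary size below are placeholders and assumptions, not values reported for this model.
```python
import sentencepiece as spm

# Illustrative sketch of the preprocessing described above. The file name,
# model prefix and vocab_size are placeholders/assumptions, not reported values.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_language_pairs.txt",  # placeholder path
    model_prefix="legal_t5_small_vocab",             # placeholder prefix
    vocab_size=32000,                                # assumed size, not stated in the card
    model_type="unigram",                            # unigram model, as described above
)
```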
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_de_small_finetuned | 43.636|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_sv_small_finetuned | c2c678ec5992f340cb8ced22e74c1f8426ead1b7 | 2021-06-23T09:49:16.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Swedish model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_sv_small_finetuned | 5 | null | transformers | 16,250 |
---
language: Spanish Swedish
tags:
- translation Spanish Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Marie Anne Isler Béguin ,"
---
# legal_t5_small_trans_es_sv_small_finetuned model
Model for translating legal text from Spanish to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all of the translation data with an unsupervised task, and is then trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_es_sv_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Swedish.
### How to use
Here is how to use this model to translate legal text from Spanish to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Marie Anne Isler Béguin ,"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_sv_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_sv_small_finetuned | 43.838|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_cs_small_finetuned | a31ba74a9fa86b3adc5e45ec5c35544255188e69 | 2021-06-23T09:51:00.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Cszech model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_cs_small_finetuned | 5 | null | transformers | 16,251 |
---
language: French Cszech
tags:
- translation French Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Compte rendu de la délégation à la Convention-cadre des Nations unies sur le changement climatique (COP17) à Durban (Afrique du Sud)"
---
# legal_t5_small_trans_fr_cs_small_finetuned model
Model for translating legal text from French to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all of the translation data with an unsupervised task, and is then trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_fr_cs_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Czech.
### How to use
Here is how to use this model to translate legal text from French to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Compte rendu de la délégation à la Convention-cadre des Nations unies sur le changement climatique (COP17) à Durban (Afrique du Sud)"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_cs_small_finetuned | 44.410|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_cs_small_finetuned | 367441417bfeff16a1af9e49620e7056f3daed87 | 2021-06-23T09:58:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Cszech model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_cs_small_finetuned | 5 | null | transformers | 16,252 |
---
language: Italian Cszech
tags:
- translation Italian Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Il consiglio di amministrazione è assistito da un comitato esecutivo."
---
# legal_t5_small_trans_it_cs_small_finetuned model
Model for translating legal text from Italian to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all of the translation data with an unsupervised task, and is then trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_it_cs_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Czech.
### How to use
Here is how to use this model to translate legal text from Italian to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Il consiglio di amministrazione è assistito da un comitato esecutivo."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_cs_small_finetuned | 43.236|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_sv | 0e312f3d33c84bbc00c05d0acf2c9382000c93ab | 2021-06-23T10:04:14.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Swedish model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_sv | 5 | null | transformers | 16,253 |
---
language: Italian Swedish
tags:
- translation Italian Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "K. considerando che, come avviene con tutti i sistemi di sanità elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),"
---
# legal_t5_small_trans_it_sv model
Model for translating legal text from Italian to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_it_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Swedish.
### How to use
Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "K. considerando che, come avviene con tutti i sistemi di sanità elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv | 41.508|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SanayCo/model_output | eb4a7800627109837a7943ae0e4c3cb00396fa75 | 2021-05-18T22:31:51.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | SanayCo | null | SanayCo/model_output | 5 | null | transformers | 16,254 | Entry not found |
SaulLu/markuplm-base | 70f114dc23b7b5fb72acfc16f9319985623515c5 | 2022-01-10T19:17:34.000Z | [
"pytorch",
"markuplm",
"arxiv:2110.08518",
"transformers"
]
| null | false | SaulLu | null | SaulLu/markuplm-base | 5 | null | transformers | 16,255 | # MarkupLM
**Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves state-of-the-art (SOTA) results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
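A minimal usage sketch is shown below. It assumes a `transformers` release that ships the MarkupLM classes (added after this card was published) and that the checkpoint carries the processor/tokenizer files; otherwise the processor can be loaded from the upstream `microsoft/markuplm-base` repository. The HTML string is illustrative only.
```python
from transformers import MarkupLMProcessor, MarkupLMModel

# Hedged sketch: requires a transformers version with MarkupLM support and assumes
# the repo provides processor/tokenizer files; the HTML snippet is illustrative only.
model_name = "SaulLu/markuplm-base"
processor = MarkupLMProcessor.from_pretrained(model_name)
model = MarkupLMModel.from_pretrained(model_name)

html = "<html><body><h1>Product page</h1><p>Free shipping on all orders.</p></body></html>"
encoding = processor(html, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```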
|
SauravMaheshkar/clr-finetuned-bert-base-uncased | 1a07b37c37dab3d6112544c2b5b32ea8da319491 | 2021-09-23T15:57:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
]
| fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-finetuned-bert-base-uncased | 5 | null | transformers | 16,256 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
---

# FineTuning
| **Architecture** | **Weights** | **Training Loss** | **Validation Loss** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 |
| xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 |
| bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 |
| albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 |
| roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
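A minimal loading sketch for the checkpoint in this card is shown below. It is hedged: the hub entry is tagged for fill-mask, so only the fine-tuned encoder backbone is assumed here rather than a task-specific readability-regression head, and the example sentence is illustrative.
```python
from transformers import AutoTokenizer, AutoModel

# Hedged sketch: load the fine-tuned encoder and inspect its hidden states.
# Only the backbone is assumed (the hub pipeline tag is fill-mask); the sentence is illustrative.
model_name = "SauravMaheshkar/clr-finetuned-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```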
|
SauravMaheshkar/clr-pretrained-bert-base-uncased | 6e5aef095215a79852335480305808c7502e4a56 | 2021-09-23T15:57:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
]
| fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-pretrained-bert-base-uncased | 5 | null | transformers | 16,257 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
metrics:
- Perplexity
---

# PreTraining
| **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 |
| electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 |
| electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 |
| electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 |
| distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
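Since this checkpoint was pretrained with masked language modelling (perplexity is the reported metric), a fill-mask pipeline is a natural smoke test; the example sentence below is illustrative, not taken from the dataset.
```python
from transformers import pipeline

# Hedged sketch: masked-language-modelling checkpoint, so fill-mask is a natural smoke test.
fill_mask = pipeline("fill-mask", model="SauravMaheshkar/clr-pretrained-bert-base-uncased")
for prediction in fill_mask("Reading short [MASK] helps estimate how difficult a text is."):
    print(prediction["token_str"], round(prediction["score"], 3))
```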
|
SauravMaheshkar/clr-pretrained-electra-small | 2d8afb136e5eb5e793892e7a3e0c5f2530f1997f | 2021-09-23T15:58:03.000Z | [
"pytorch",
"electra",
"pretraining",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0"
]
| null | false | SauravMaheshkar | null | SauravMaheshkar/clr-pretrained-electra-small | 5 | null | transformers | 16,258 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
metrics:
- Perplexity
---

# PreTraining
| **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 |
| electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 |
| electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 |
| electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 |
| distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
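ELECTRA checkpoints are pretrained with replaced-token detection rather than a fill-mask head, so a hedged smoke test looks slightly different. The sketch below assumes the repo stores an `ElectraForPreTraining`-compatible checkpoint together with a tokenizer; the sentence is illustrative.
```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

# Hedged sketch: replaced-token-detection head; assumes the checkpoint matches
# ElectraForPreTraining and ships a tokenizer. The sentence is illustrative.
model_name = "SauravMaheshkar/clr-pretrained-electra-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("The experiment ran for ten epochs.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.round(torch.sigmoid(logits)))  # 1.0 marks tokens the discriminator flags as replaced
```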
|
SetFit/deberta-v3-base__sst2__all-train | 25781a34c7fb89c19d359a20e5f68491335f370e | 2022-02-08T08:20:33.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-base__sst2__all-train | 5 | null | transformers | 16,259 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-base__sst2__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base__sst2__all-train
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6964
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
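For reference, the list above maps roughly onto the following `TrainingArguments`. This is a hedged reconstruction: `output_dir` is a placeholder, and the Adam betas/epsilon are left at the library defaults, which match the values listed.
```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported configuration; output_dir is a placeholder
# and anything not listed above is left at the Trainer defaults.
training_args = TrainingArguments(
    output_dir="./results",           # placeholder, not part of the reported config
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                        # "mixed_precision_training: Native AMP"
)
```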
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.6964 | 0.49 |
| No log | 2.0 | 14 | 0.7010 | 0.49 |
| No log | 3.0 | 21 | 0.7031 | 0.49 |
| No log | 4.0 | 28 | 0.7054 | 0.49 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-1 | 228bfec581d0eb46c966c9b8fcc4e6bafb6d0806 | 2022-02-10T10:27:26.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-1 | 5 | null | transformers | 16,260 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Accuracy: 0.5497
## Model description
More information needed
## Intended uses & limitations
More information needed
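In the absence of a fuller description, a minimal, hypothetical inference sketch is given below; the emitted label names depend entirely on the exported config, and the input sentence is illustrative.
```python
from transformers import pipeline

# Hypothetical usage sketch: a sentiment-style classifier fine-tuned from deberta-v3-large.
# Label names come from the checkpoint's config; the sentence is illustrative.
classifier = pipeline(
    "text-classification",
    model="SetFit/deberta-v3-large__sst2__train-16-1",
)
print(classifier("A moving and beautifully shot film."))
```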
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7086 | 1.0 | 7 | 0.7176 | 0.2857 |
| 0.6897 | 2.0 | 14 | 0.7057 | 0.2857 |
| 0.6491 | 3.0 | 21 | 0.6582 | 0.8571 |
| 0.567 | 4.0 | 28 | 0.4480 | 0.8571 |
| 0.4304 | 5.0 | 35 | 0.5465 | 0.7143 |
| 0.0684 | 6.0 | 42 | 0.5408 | 0.8571 |
| 0.0339 | 7.0 | 49 | 0.6501 | 0.8571 |
| 0.0082 | 8.0 | 56 | 0.9152 | 0.8571 |
| 0.0067 | 9.0 | 63 | 2.5162 | 0.5714 |
| 0.0045 | 10.0 | 70 | 1.1136 | 0.8571 |
| 0.0012 | 11.0 | 77 | 1.1668 | 0.8571 |
| 0.0007 | 12.0 | 84 | 1.2071 | 0.8571 |
| 0.0005 | 13.0 | 91 | 1.2310 | 0.8571 |
| 0.0006 | 14.0 | 98 | 1.2476 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-2 | 3c7cceb43b7ac991ff1942f53cd9c690184c689e | 2022-02-10T10:33:22.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-2 | 5 | null | transformers | 16,261 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6959
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7079 | 1.0 | 7 | 0.7361 | 0.2857 |
| 0.6815 | 2.0 | 14 | 0.7659 | 0.2857 |
| 0.6938 | 3.0 | 21 | 0.7944 | 0.2857 |
| 0.4584 | 4.0 | 28 | 1.2441 | 0.2857 |
| 0.4949 | 5.0 | 35 | 1.2285 | 0.5714 |
| 0.0574 | 6.0 | 42 | 1.7796 | 0.5714 |
| 0.0156 | 7.0 | 49 | 2.6027 | 0.5714 |
| 0.0051 | 8.0 | 56 | 2.8717 | 0.5714 |
| 0.0017 | 9.0 | 63 | 2.8491 | 0.5714 |
| 0.0023 | 10.0 | 70 | 1.7149 | 0.7143 |
| 0.001 | 11.0 | 77 | 1.1101 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-3 | cb77f1f59502787c1b84ce9681eb4104e5aaa92a | 2022-02-10T10:41:12.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-3 | 5 | null | transformers | 16,262 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-3
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6286
- Accuracy: 0.7068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6955 | 1.0 | 7 | 0.7370 | 0.2857 |
| 0.6919 | 2.0 | 14 | 0.6855 | 0.4286 |
| 0.6347 | 3.0 | 21 | 0.5872 | 0.7143 |
| 0.4016 | 4.0 | 28 | 0.6644 | 0.7143 |
| 0.3097 | 5.0 | 35 | 0.5120 | 0.7143 |
| 0.0785 | 6.0 | 42 | 0.5845 | 0.7143 |
| 0.024 | 7.0 | 49 | 0.6951 | 0.7143 |
| 0.0132 | 8.0 | 56 | 0.8972 | 0.7143 |
| 0.0037 | 9.0 | 63 | 1.5798 | 0.7143 |
| 0.0034 | 10.0 | 70 | 1.5178 | 0.7143 |
| 0.003 | 11.0 | 77 | 1.3511 | 0.7143 |
| 0.0012 | 12.0 | 84 | 1.1346 | 0.7143 |
| 0.0007 | 13.0 | 91 | 0.9752 | 0.7143 |
| 0.0008 | 14.0 | 98 | 0.8531 | 0.7143 |
| 0.0007 | 15.0 | 105 | 0.8149 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-6 | a4dc0b0b7f6bbd3ea2a1b0269af5d361dcff077c | 2022-02-10T11:01:55.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-6 | 5 | null | transformers | 16,263 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6846
- Accuracy: 0.5058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6673 | 1.0 | 7 | 0.7580 | 0.2857 |
| 0.5896 | 2.0 | 14 | 0.7885 | 0.5714 |
| 0.5294 | 3.0 | 21 | 1.0040 | 0.4286 |
| 0.3163 | 4.0 | 28 | 1.1761 | 0.5714 |
| 0.1315 | 5.0 | 35 | 1.4315 | 0.4286 |
| 0.0312 | 6.0 | 42 | 2.6115 | 0.2857 |
| 0.1774 | 7.0 | 49 | 2.1631 | 0.5714 |
| 0.0052 | 8.0 | 56 | 2.3838 | 0.4286 |
| 0.0043 | 9.0 | 63 | 2.6553 | 0.4286 |
| 0.0032 | 10.0 | 70 | 2.2774 | 0.4286 |
| 0.0015 | 11.0 | 77 | 1.9467 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-3 | a94388f5838e52237579af6dddbcc34669525690 | 2022-02-10T08:43:40.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-3 | 5 | null | transformers | 16,264 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-3
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6421
- Accuracy: 0.6310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6696 | 1.0 | 3 | 0.7917 | 0.25 |
| 0.6436 | 2.0 | 6 | 0.8107 | 0.25 |
| 0.6923 | 3.0 | 9 | 0.8302 | 0.25 |
| 0.5051 | 4.0 | 12 | 0.9828 | 0.25 |
| 0.3688 | 5.0 | 15 | 0.7402 | 0.25 |
| 0.2671 | 6.0 | 18 | 0.5820 | 0.75 |
| 0.1935 | 7.0 | 21 | 0.8356 | 0.5 |
| 0.0815 | 8.0 | 24 | 1.0431 | 0.25 |
| 0.0591 | 9.0 | 27 | 0.9679 | 0.75 |
| 0.0276 | 10.0 | 30 | 1.0659 | 0.75 |
| 0.0175 | 11.0 | 33 | 0.9689 | 0.75 |
| 0.0152 | 12.0 | 36 | 0.8820 | 0.75 |
| 0.006 | 13.0 | 39 | 0.8337 | 0.75 |
| 0.0041 | 14.0 | 42 | 0.7650 | 0.75 |
| 0.0036 | 15.0 | 45 | 0.6960 | 0.75 |
| 0.0034 | 16.0 | 48 | 0.6548 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-4 | a805b1e3d59a63b4a5c2e8a965f7e4829988e6ac | 2022-02-10T09:02:04.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-4 | 5 | null | transformers | 16,265 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-4
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3023
- Accuracy: 0.7057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6816 | 1.0 | 3 | 0.8072 | 0.25 |
| 0.6672 | 2.0 | 6 | 0.8740 | 0.25 |
| 0.6667 | 3.0 | 9 | 0.8578 | 0.25 |
| 0.5346 | 4.0 | 12 | 1.0353 | 0.25 |
| 0.4517 | 5.0 | 15 | 1.1030 | 0.25 |
| 0.3095 | 6.0 | 18 | 0.9986 | 0.25 |
| 0.2464 | 7.0 | 21 | 0.9286 | 0.5 |
| 0.1342 | 8.0 | 24 | 0.4063 | 1.0 |
| 0.0851 | 9.0 | 27 | 0.2210 | 1.0 |
| 0.0491 | 10.0 | 30 | 0.2302 | 1.0 |
| 0.0211 | 11.0 | 33 | 0.4020 | 0.75 |
| 0.017 | 12.0 | 36 | 0.2382 | 1.0 |
| 0.0084 | 13.0 | 39 | 0.0852 | 1.0 |
| 0.0051 | 14.0 | 42 | 0.0354 | 1.0 |
| 0.0047 | 15.0 | 45 | 0.0208 | 1.0 |
| 0.0029 | 16.0 | 48 | 0.0155 | 1.0 |
| 0.0022 | 17.0 | 51 | 0.0139 | 1.0 |
| 0.0019 | 18.0 | 54 | 0.0144 | 1.0 |
| 0.0016 | 19.0 | 57 | 0.0168 | 1.0 |
| 0.0013 | 20.0 | 60 | 0.0231 | 1.0 |
| 0.0011 | 21.0 | 63 | 0.0369 | 1.0 |
| 0.0009 | 22.0 | 66 | 0.0528 | 1.0 |
| 0.001 | 23.0 | 69 | 0.0639 | 1.0 |
| 0.0009 | 24.0 | 72 | 0.0670 | 1.0 |
| 0.0009 | 25.0 | 75 | 0.0526 | 1.0 |
| 0.0008 | 26.0 | 78 | 0.0425 | 1.0 |
| 0.0011 | 27.0 | 81 | 0.0135 | 1.0 |
| 0.0007 | 28.0 | 84 | 0.0076 | 1.0 |
| 0.0007 | 29.0 | 87 | 0.0057 | 1.0 |
| 0.0007 | 30.0 | 90 | 0.0049 | 1.0 |
| 0.0008 | 31.0 | 93 | 0.0045 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0044 | 1.0 |
| 0.0008 | 33.0 | 99 | 0.0043 | 1.0 |
| 0.0005 | 34.0 | 102 | 0.0044 | 1.0 |
| 0.0006 | 35.0 | 105 | 0.0045 | 1.0 |
| 0.0006 | 36.0 | 108 | 0.0046 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0048 | 1.0 |
| 0.0006 | 38.0 | 114 | 0.0049 | 1.0 |
| 0.0005 | 39.0 | 117 | 0.0050 | 1.0 |
| 0.0005 | 40.0 | 120 | 0.0050 | 1.0 |
| 0.0004 | 41.0 | 123 | 0.0051 | 1.0 |
| 0.0005 | 42.0 | 126 | 0.0051 | 1.0 |
| 0.0004 | 43.0 | 129 | 0.0051 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-6 | fcca7b07e29d3e61d8c7c82db2cf48d61bb1ec3d | 2022-02-10T09:46:57.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-6 | 5 | null | transformers | 16,266 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4331
- Accuracy: 0.7106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6486 | 1.0 | 3 | 0.7901 | 0.25 |
| 0.6418 | 2.0 | 6 | 0.9259 | 0.25 |
| 0.6169 | 3.0 | 9 | 1.0574 | 0.25 |
| 0.5639 | 4.0 | 12 | 1.1372 | 0.25 |
| 0.4562 | 5.0 | 15 | 0.6090 | 0.5 |
| 0.3105 | 6.0 | 18 | 0.4435 | 1.0 |
| 0.2303 | 7.0 | 21 | 0.2804 | 1.0 |
| 0.1388 | 8.0 | 24 | 0.2205 | 1.0 |
| 0.0918 | 9.0 | 27 | 0.1282 | 1.0 |
| 0.0447 | 10.0 | 30 | 0.0643 | 1.0 |
| 0.0297 | 11.0 | 33 | 0.0361 | 1.0 |
| 0.0159 | 12.0 | 36 | 0.0211 | 1.0 |
| 0.0102 | 13.0 | 39 | 0.0155 | 1.0 |
| 0.0061 | 14.0 | 42 | 0.0158 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0189 | 1.0 |
| 0.0035 | 16.0 | 48 | 0.0254 | 1.0 |
| 0.0027 | 17.0 | 51 | 0.0305 | 1.0 |
| 0.0021 | 18.0 | 54 | 0.0287 | 1.0 |
| 0.0016 | 19.0 | 57 | 0.0215 | 1.0 |
| 0.0016 | 20.0 | 60 | 0.0163 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0138 | 1.0 |
| 0.0015 | 22.0 | 66 | 0.0131 | 1.0 |
| 0.001 | 23.0 | 69 | 0.0132 | 1.0 |
| 0.0014 | 24.0 | 72 | 0.0126 | 1.0 |
| 0.0011 | 25.0 | 75 | 0.0125 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0119 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0110 | 1.0 |
| 0.0007 | 28.0 | 84 | 0.0106 | 1.0 |
| 0.0008 | 29.0 | 87 | 0.0095 | 1.0 |
| 0.0009 | 30.0 | 90 | 0.0089 | 1.0 |
| 0.0008 | 31.0 | 93 | 0.0083 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0075 | 1.0 |
| 0.0008 | 33.0 | 99 | 0.0066 | 1.0 |
| 0.0006 | 34.0 | 102 | 0.0059 | 1.0 |
| 0.0007 | 35.0 | 105 | 0.0054 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0049 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0047 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0045 | 1.0 |
| 0.0006 | 40.0 | 120 | 0.0046 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0045 | 1.0 |
| 0.0006 | 42.0 | 126 | 0.0044 | 1.0 |
| 0.0006 | 43.0 | 129 | 0.0043 | 1.0 |
| 0.0006 | 44.0 | 132 | 0.0044 | 1.0 |
| 0.0005 | 45.0 | 135 | 0.0045 | 1.0 |
| 0.0006 | 46.0 | 138 | 0.0043 | 1.0 |
| 0.0006 | 47.0 | 141 | 0.0043 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0041 | 1.0 |
| 0.0007 | 49.0 | 147 | 0.0042 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0042 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-9 | 3d9b0c95622815f22bf9355b0f6740accbb289c7 | 2022-02-10T10:10:14.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-9 | 5 | null | transformers | 16,267 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-9
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6013
- Accuracy: 0.7210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6757 | 1.0 | 3 | 0.7810 | 0.25 |
| 0.6506 | 2.0 | 6 | 0.8102 | 0.25 |
| 0.6463 | 3.0 | 9 | 0.8313 | 0.25 |
| 0.5813 | 4.0 | 12 | 0.8858 | 0.25 |
| 0.4635 | 5.0 | 15 | 0.8220 | 0.25 |
| 0.3992 | 6.0 | 18 | 0.7226 | 0.5 |
| 0.3281 | 7.0 | 21 | 0.6707 | 0.75 |
| 0.2276 | 8.0 | 24 | 0.7515 | 0.75 |
| 0.1674 | 9.0 | 27 | 0.6971 | 0.75 |
| 0.0873 | 10.0 | 30 | 0.5419 | 0.75 |
| 0.0525 | 11.0 | 33 | 0.5025 | 0.75 |
| 0.0286 | 12.0 | 36 | 0.5229 | 0.75 |
| 0.0149 | 13.0 | 39 | 0.5660 | 0.75 |
| 0.0082 | 14.0 | 42 | 0.6954 | 0.75 |
| 0.006 | 15.0 | 45 | 0.8649 | 0.75 |
| 0.0043 | 16.0 | 48 | 1.0011 | 0.75 |
| 0.0035 | 17.0 | 51 | 1.0909 | 0.75 |
| 0.0021 | 18.0 | 54 | 1.1615 | 0.75 |
| 0.0017 | 19.0 | 57 | 1.2147 | 0.75 |
| 0.0013 | 20.0 | 60 | 1.2585 | 0.75 |
| 0.0016 | 21.0 | 63 | 1.2917 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-1 | 752da4bb58b9d36dec8cb28e47885386f82ae71b | 2022-02-10T07:50:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-1 | 5 | null | transformers | 16,268 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0424
- Accuracy: 0.5355
## Model description
More information needed
## Intended uses & limitations
More information needed
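As a hedged illustration only: the task name suggests a three-way hate-speech/offensive/neither classifier, but the actual label names are whatever the exported config contains, and the example sentence is illustrative.
```python
from transformers import pipeline

# Hypothetical usage sketch; label names depend on the checkpoint's config and the
# example sentence is illustrative.
classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-1",
)
print(classifier("Have a great day, everyone."))
```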
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 10 | 1.1049 | 0.1 |
| 1.0641 | 2.0 | 20 | 1.0768 | 0.3 |
| 0.9742 | 3.0 | 30 | 1.0430 | 0.4 |
| 0.8765 | 4.0 | 40 | 1.0058 | 0.4 |
| 0.6979 | 5.0 | 50 | 0.8488 | 0.7 |
| 0.563 | 6.0 | 60 | 0.7221 | 0.7 |
| 0.4135 | 7.0 | 70 | 0.6587 | 0.8 |
| 0.2509 | 8.0 | 80 | 0.5577 | 0.7 |
| 0.0943 | 9.0 | 90 | 0.5840 | 0.7 |
| 0.0541 | 10.0 | 100 | 0.6959 | 0.7 |
| 0.0362 | 11.0 | 110 | 0.6884 | 0.6 |
| 0.0254 | 12.0 | 120 | 0.9263 | 0.6 |
| 0.0184 | 13.0 | 130 | 0.7992 | 0.6 |
| 0.0172 | 14.0 | 140 | 0.7351 | 0.6 |
| 0.0131 | 15.0 | 150 | 0.7664 | 0.6 |
| 0.0117 | 16.0 | 160 | 0.8262 | 0.6 |
| 0.0101 | 17.0 | 170 | 0.8839 | 0.6 |
| 0.0089 | 18.0 | 180 | 0.9018 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-2 | f3d788beeb9ffef46052da43d3cd775355f4f8a2 | 2022-02-10T07:51:21.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-2 | 5 | null | transformers | 16,269 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9210
- Accuracy: 0.5635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0915 | 1.0 | 10 | 1.1051 | 0.4 |
| 1.0663 | 2.0 | 20 | 1.0794 | 0.3 |
| 1.0307 | 3.0 | 30 | 1.0664 | 0.5 |
| 0.9443 | 4.0 | 40 | 1.0729 | 0.5 |
| 0.8373 | 5.0 | 50 | 1.0175 | 0.4 |
| 0.6892 | 6.0 | 60 | 0.9624 | 0.5 |
| 0.538 | 7.0 | 70 | 0.9924 | 0.5 |
| 0.4173 | 8.0 | 80 | 1.0136 | 0.6 |
| 0.1846 | 9.0 | 90 | 1.0683 | 0.6 |
| 0.1125 | 10.0 | 100 | 1.2376 | 0.6 |
| 0.0754 | 11.0 | 110 | 1.2537 | 0.6 |
| 0.0401 | 12.0 | 120 | 1.4387 | 0.6 |
| 0.0285 | 13.0 | 130 | 1.5702 | 0.6 |
| 0.0241 | 14.0 | 140 | 1.6795 | 0.6 |
| 0.0175 | 15.0 | 150 | 1.7228 | 0.6 |
| 0.0147 | 16.0 | 160 | 1.7892 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-3 | 1b1023bb8dcac244acb35d3c68c5afcd2a6e6b08 | 2022-02-10T07:52:27.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-3 | 5 | null | transformers | 16,270 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0675
- Accuracy: 0.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0951 | 1.0 | 10 | 1.1346 | 0.1 |
| 1.0424 | 2.0 | 20 | 1.1120 | 0.2 |
| 0.957 | 3.0 | 30 | 1.1002 | 0.3 |
| 0.7889 | 4.0 | 40 | 1.0838 | 0.4 |
| 0.6162 | 5.0 | 50 | 1.0935 | 0.5 |
| 0.4849 | 6.0 | 60 | 1.0867 | 0.5 |
| 0.3089 | 7.0 | 70 | 1.1145 | 0.5 |
| 0.2145 | 8.0 | 80 | 1.1278 | 0.6 |
| 0.0805 | 9.0 | 90 | 1.2801 | 0.6 |
| 0.0497 | 10.0 | 100 | 1.3296 | 0.6 |
| 0.0328 | 11.0 | 110 | 1.2913 | 0.6 |
| 0.0229 | 12.0 | 120 | 1.3692 | 0.6 |
| 0.0186 | 13.0 | 130 | 1.4642 | 0.6 |
| 0.0161 | 14.0 | 140 | 1.5568 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-4 | 559e5ec22d824b31ed009b42a02706092925e935 | 2022-02-10T07:53:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-4 | 5 | null | transformers | 16,271 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0903
- Accuracy: 0.4805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0974 | 1.0 | 10 | 1.1139 | 0.1 |
| 1.0637 | 2.0 | 20 | 1.0988 | 0.1 |
| 0.9758 | 3.0 | 30 | 1.1013 | 0.1 |
| 0.9012 | 4.0 | 40 | 1.0769 | 0.3 |
| 0.6993 | 5.0 | 50 | 1.0484 | 0.6 |
| 0.5676 | 6.0 | 60 | 1.0223 | 0.6 |
| 0.4069 | 7.0 | 70 | 0.9190 | 0.6 |
| 0.3192 | 8.0 | 80 | 1.1370 | 0.6 |
| 0.1112 | 9.0 | 90 | 1.1728 | 0.6 |
| 0.07 | 10.0 | 100 | 1.1998 | 0.6 |
| 0.0397 | 11.0 | 110 | 1.3700 | 0.6 |
| 0.027 | 12.0 | 120 | 1.3329 | 0.6 |
| 0.021 | 13.0 | 130 | 1.2697 | 0.6 |
| 0.0177 | 14.0 | 140 | 1.4195 | 0.6 |
| 0.0142 | 15.0 | 150 | 1.5342 | 0.6 |
| 0.0118 | 16.0 | 160 | 1.5999 | 0.6 |
| 0.0108 | 17.0 | 170 | 1.6327 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-5 | c35fde6d8f4447e859572569cdfd6cba483f4b4b | 2022-02-10T07:54:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-5 | 5 | null | transformers | 16,272 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9907
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 10 | 1.1287 | 0.2 |
| 1.0481 | 2.0 | 20 | 1.1136 | 0.2 |
| 0.9498 | 3.0 | 30 | 1.1200 | 0.2 |
| 0.8157 | 4.0 | 40 | 1.0771 | 0.2 |
| 0.65 | 5.0 | 50 | 0.9733 | 0.4 |
| 0.5021 | 6.0 | 60 | 1.0626 | 0.4 |
| 0.3358 | 7.0 | 70 | 1.0787 | 0.4 |
| 0.2017 | 8.0 | 80 | 1.3183 | 0.4 |
| 0.088 | 9.0 | 90 | 1.2204 | 0.5 |
| 0.0527 | 10.0 | 100 | 1.6892 | 0.4 |
| 0.0337 | 11.0 | 110 | 1.6967 | 0.5 |
| 0.0238 | 12.0 | 120 | 1.5436 | 0.5 |
| 0.0183 | 13.0 | 130 | 1.7447 | 0.4 |
| 0.0159 | 14.0 | 140 | 1.8999 | 0.4 |
| 0.014 | 15.0 | 150 | 1.9004 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-6 | e2f121b3afd16e0406cfba80e63cf2ddaae85597 | 2022-02-10T07:55:56.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-6 | 5 | null | transformers | 16,273 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8331
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0881 | 1.0 | 10 | 1.1248 | 0.1 |
| 1.0586 | 2.0 | 20 | 1.1162 | 0.2 |
| 0.9834 | 3.0 | 30 | 1.1199 | 0.3 |
| 0.9271 | 4.0 | 40 | 1.0740 | 0.3 |
| 0.7663 | 5.0 | 50 | 1.0183 | 0.5 |
| 0.6042 | 6.0 | 60 | 1.0259 | 0.5 |
| 0.4482 | 7.0 | 70 | 0.8699 | 0.7 |
| 0.3072 | 8.0 | 80 | 1.0615 | 0.5 |
| 0.1458 | 9.0 | 90 | 1.0164 | 0.5 |
| 0.0838 | 10.0 | 100 | 1.0620 | 0.5 |
| 0.055 | 11.0 | 110 | 1.1829 | 0.5 |
| 0.0347 | 12.0 | 120 | 1.2815 | 0.4 |
| 0.0244 | 13.0 | 130 | 1.2607 | 0.6 |
| 0.0213 | 14.0 | 140 | 1.3695 | 0.5 |
| 0.0169 | 15.0 | 150 | 1.4397 | 0.5 |
| 0.0141 | 16.0 | 160 | 1.4388 | 0.6 |
| 0.0122 | 17.0 | 170 | 1.4242 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-9 | 6ca41b234ffd7dead95be68ada9e8288974e1479 | 2022-02-10T07:59:15.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-9 | 5 | null | transformers | 16,274 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1121
- Accuracy: 0.16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1038 | 1.0 | 10 | 1.1243 | 0.1 |
| 1.0859 | 2.0 | 20 | 1.1182 | 0.2 |
| 1.0234 | 3.0 | 30 | 1.1442 | 0.3 |
| 0.9493 | 4.0 | 40 | 1.2239 | 0.1 |
| 0.8114 | 5.0 | 50 | 1.2023 | 0.4 |
| 0.6464 | 6.0 | 60 | 1.2329 | 0.4 |
| 0.4731 | 7.0 | 70 | 1.2971 | 0.5 |
| 0.3355 | 8.0 | 80 | 1.3913 | 0.4 |
| 0.1268 | 9.0 | 90 | 1.4670 | 0.5 |
| 0.0747 | 10.0 | 100 | 1.7961 | 0.4 |
| 0.0449 | 11.0 | 110 | 1.8168 | 0.5 |
| 0.0307 | 12.0 | 120 | 1.9307 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-0 | 0fc8ee51756ab85370c4e5799bcb7eb20cc97f5b | 2022-02-10T08:00:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-0 | 5 | null | transformers | 16,275 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7714
- Accuracy: 0.705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0871 | 1.0 | 19 | 1.0704 | 0.45 |
| 1.0019 | 2.0 | 38 | 1.0167 | 0.55 |
| 0.8412 | 3.0 | 57 | 0.9134 | 0.55 |
| 0.6047 | 4.0 | 76 | 0.8430 | 0.6 |
| 0.3746 | 5.0 | 95 | 0.8315 | 0.6 |
| 0.1885 | 6.0 | 114 | 0.8585 | 0.6 |
| 0.0772 | 7.0 | 133 | 0.9443 | 0.65 |
| 0.0312 | 8.0 | 152 | 1.1019 | 0.65 |
| 0.0161 | 9.0 | 171 | 1.1420 | 0.65 |
| 0.0102 | 10.0 | 190 | 1.2773 | 0.65 |
| 0.0077 | 11.0 | 209 | 1.2454 | 0.65 |
| 0.0064 | 12.0 | 228 | 1.2785 | 0.65 |
| 0.006 | 13.0 | 247 | 1.3834 | 0.65 |
| 0.0045 | 14.0 | 266 | 1.4139 | 0.65 |
| 0.0043 | 15.0 | 285 | 1.4056 | 0.65 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-2 | 44b2982c8c1676a2612f6be9437ba37b58435e1d | 2022-02-10T08:02:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-2 | 5 | null | transformers | 16,276 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7136
- Accuracy: 0.679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1052 | 1.0 | 19 | 1.0726 | 0.45 |
| 1.0421 | 2.0 | 38 | 1.0225 | 0.5 |
| 0.9173 | 3.0 | 57 | 0.9164 | 0.6 |
| 0.6822 | 4.0 | 76 | 0.8251 | 0.7 |
| 0.4407 | 5.0 | 95 | 0.8908 | 0.5 |
| 0.2367 | 6.0 | 114 | 0.6772 | 0.75 |
| 0.1145 | 7.0 | 133 | 0.7792 | 0.65 |
| 0.0479 | 8.0 | 152 | 1.0657 | 0.6 |
| 0.0186 | 9.0 | 171 | 1.2228 | 0.65 |
| 0.0111 | 10.0 | 190 | 1.1100 | 0.6 |
| 0.0083 | 11.0 | 209 | 1.1991 | 0.65 |
| 0.0067 | 12.0 | 228 | 1.2654 | 0.65 |
| 0.0061 | 13.0 | 247 | 1.2837 | 0.65 |
| 0.0046 | 14.0 | 266 | 1.2860 | 0.6 |
| 0.0043 | 15.0 | 285 | 1.3160 | 0.65 |
| 0.0037 | 16.0 | 304 | 1.3323 | 0.65 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-5 | 5bcadc624a6deb964d7548f2429632bb19f85d98 | 2022-02-10T08:06:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-5 | 5 | null | transformers | 16,277 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1327
- Accuracy: 0.57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0972 | 1.0 | 19 | 1.0470 | 0.45 |
| 0.9738 | 2.0 | 38 | 0.9244 | 0.65 |
| 0.7722 | 3.0 | 57 | 0.8612 | 0.65 |
| 0.4929 | 4.0 | 76 | 0.6759 | 0.75 |
| 0.2435 | 5.0 | 95 | 0.7273 | 0.7 |
| 0.0929 | 6.0 | 114 | 0.6444 | 0.85 |
| 0.0357 | 7.0 | 133 | 0.7671 | 0.8 |
| 0.0173 | 8.0 | 152 | 0.7599 | 0.75 |
| 0.0121 | 9.0 | 171 | 0.8140 | 0.8 |
| 0.0081 | 10.0 | 190 | 0.7861 | 0.8 |
| 0.0066 | 11.0 | 209 | 0.8318 | 0.8 |
| 0.0057 | 12.0 | 228 | 0.8777 | 0.8 |
| 0.0053 | 13.0 | 247 | 0.8501 | 0.8 |
| 0.004 | 14.0 | 266 | 0.8603 | 0.8 |
| 0.004 | 15.0 | 285 | 0.8787 | 0.8 |
| 0.0034 | 16.0 | 304 | 0.8969 | 0.8 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9 | 7c047362b64c3e987550dc859e15a11c68d8e058 | 2022-02-10T08:11:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9 | 5 | null | transformers | 16,278 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7075
- Accuracy: 0.692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1054 | 1.0 | 19 | 1.0938 | 0.35 |
| 1.0338 | 2.0 | 38 | 1.0563 | 0.65 |
| 0.8622 | 3.0 | 57 | 0.9372 | 0.6 |
| 0.5919 | 4.0 | 76 | 0.8461 | 0.6 |
| 0.3357 | 5.0 | 95 | 1.0206 | 0.45 |
| 0.1621 | 6.0 | 114 | 0.9802 | 0.7 |
| 0.0637 | 7.0 | 133 | 1.2434 | 0.65 |
| 0.0261 | 8.0 | 152 | 1.3865 | 0.65 |
| 0.0156 | 9.0 | 171 | 1.4414 | 0.7 |
| 0.01 | 10.0 | 190 | 1.5502 | 0.7 |
| 0.0079 | 11.0 | 209 | 1.6102 | 0.7 |
| 0.0062 | 12.0 | 228 | 1.6525 | 0.7 |
| 0.0058 | 13.0 | 247 | 1.6884 | 0.7 |
| 0.0046 | 14.0 | 266 | 1.7479 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-1 | 33a423bec09c69cc4f799d1a4abdfb4830cdc964 | 2022-02-10T07:40:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-1 | 5 | null | transformers | 16,279 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1013
- Accuracy: 0.0915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0866 | 1.0 | 5 | 1.1363 | 0.0 |
| 1.0439 | 2.0 | 10 | 1.1803 | 0.0 |
| 1.0227 | 3.0 | 15 | 1.2162 | 0.2 |
| 0.9111 | 4.0 | 20 | 1.2619 | 0.0 |
| 0.8243 | 5.0 | 25 | 1.2929 | 0.2 |
| 0.7488 | 6.0 | 30 | 1.3010 | 0.2 |
| 0.62 | 7.0 | 35 | 1.3011 | 0.2 |
| 0.5054 | 8.0 | 40 | 1.2931 | 0.4 |
| 0.4191 | 9.0 | 45 | 1.3274 | 0.4 |
| 0.4107 | 10.0 | 50 | 1.3259 | 0.4 |
| 0.3376 | 11.0 | 55 | 1.2800 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-2 | 95a95894a3c90c6325a2ff58746156223e3f9a63 | 2022-02-10T07:41:07.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-2 | 5 | null | transformers | 16,280 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1019
- Accuracy: 0.139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1082 | 1.0 | 5 | 1.1432 | 0.0 |
| 1.0524 | 2.0 | 10 | 1.1613 | 0.0 |
| 1.0641 | 3.0 | 15 | 1.1547 | 0.0 |
| 0.9592 | 4.0 | 20 | 1.1680 | 0.0 |
| 0.9085 | 5.0 | 25 | 1.1762 | 0.0 |
| 0.8508 | 6.0 | 30 | 1.1809 | 0.2 |
| 0.7263 | 7.0 | 35 | 1.1912 | 0.2 |
| 0.6448 | 8.0 | 40 | 1.2100 | 0.2 |
| 0.5378 | 9.0 | 45 | 1.2037 | 0.2 |
| 0.5031 | 10.0 | 50 | 1.2096 | 0.2 |
| 0.4041 | 11.0 | 55 | 1.2203 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-3 | e5c25c54a54737e08629a07b56944111f9bdd10f | 2022-02-10T07:42:05.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-3 | 5 | null | transformers | 16,281 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9681
- Accuracy: 0.549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1073 | 1.0 | 5 | 1.1393 | 0.0 |
| 1.0392 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0302 | 3.0 | 15 | 1.1694 | 0.2 |
| 0.9176 | 4.0 | 20 | 1.1846 | 0.2 |
| 0.8339 | 5.0 | 25 | 1.1663 | 0.2 |
| 0.7533 | 6.0 | 30 | 1.1513 | 0.4 |
| 0.6327 | 7.0 | 35 | 1.1474 | 0.4 |
| 0.4402 | 8.0 | 40 | 1.1385 | 0.4 |
| 0.3752 | 9.0 | 45 | 1.0965 | 0.2 |
| 0.3448 | 10.0 | 50 | 1.0357 | 0.2 |
| 0.2582 | 11.0 | 55 | 1.0438 | 0.2 |
| 0.1903 | 12.0 | 60 | 1.0561 | 0.2 |
| 0.1479 | 13.0 | 65 | 1.0569 | 0.2 |
| 0.1129 | 14.0 | 70 | 1.0455 | 0.2 |
| 0.1071 | 15.0 | 75 | 1.0416 | 0.4 |
| 0.0672 | 16.0 | 80 | 1.1164 | 0.4 |
| 0.0561 | 17.0 | 85 | 1.1846 | 0.6 |
| 0.0463 | 18.0 | 90 | 1.2040 | 0.6 |
| 0.0431 | 19.0 | 95 | 1.2078 | 0.6 |
| 0.0314 | 20.0 | 100 | 1.2368 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-4 | 1680542a3adb1f19763cbc446572d810c2e8847c | 2022-02-10T07:42:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-4 | 5 | null | transformers | 16,282 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1045
- Accuracy: 0.128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1115 | 1.0 | 5 | 1.1174 | 0.0 |
| 1.0518 | 2.0 | 10 | 1.1379 | 0.0 |
| 1.0445 | 3.0 | 15 | 1.1287 | 0.0 |
| 0.9306 | 4.0 | 20 | 1.1324 | 0.2 |
| 0.8242 | 5.0 | 25 | 1.1219 | 0.2 |
| 0.7986 | 6.0 | 30 | 1.1369 | 0.4 |
| 0.7369 | 7.0 | 35 | 1.1732 | 0.2 |
| 0.534 | 8.0 | 40 | 1.1828 | 0.6 |
| 0.4285 | 9.0 | 45 | 1.1482 | 0.6 |
| 0.3691 | 10.0 | 50 | 1.1401 | 0.6 |
| 0.3215 | 11.0 | 55 | 1.1286 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-9 | 3c2056a99d306db9801478e400ac0081d61a518e | 2022-02-10T07:47:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-9 | 5 | null | transformers | 16,283 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0959
- Accuracy: 0.093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1068 | 1.0 | 5 | 1.1545 | 0.0 |
| 1.0494 | 2.0 | 10 | 1.1971 | 0.0 |
| 1.0612 | 3.0 | 15 | 1.2164 | 0.0 |
| 0.9517 | 4.0 | 20 | 1.2545 | 0.0 |
| 0.8874 | 5.0 | 25 | 1.2699 | 0.0 |
| 0.8598 | 6.0 | 30 | 1.2835 | 0.0 |
| 0.7006 | 7.0 | 35 | 1.3139 | 0.0 |
| 0.5969 | 8.0 | 40 | 1.3116 | 0.2 |
| 0.4769 | 9.0 | 45 | 1.3124 | 0.4 |
| 0.4352 | 10.0 | 50 | 1.3541 | 0.4 |
| 0.3231 | 11.0 | 55 | 1.3919 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-1 | c4278da3a761698eac84bdfa926b7e46ed270c68 | 2022-02-10T07:19:37.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-16-1 | 5 | null | transformers | 16,284 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6012
- Accuracy: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6983 | 1.0 | 7 | 0.7036 | 0.2857 |
| 0.6836 | 2.0 | 14 | 0.7181 | 0.2857 |
| 0.645 | 3.0 | 21 | 0.7381 | 0.2857 |
| 0.5902 | 4.0 | 28 | 0.7746 | 0.2857 |
| 0.5799 | 5.0 | 35 | 0.7242 | 0.5714 |
| 0.3584 | 6.0 | 42 | 0.6935 | 0.5714 |
| 0.2596 | 7.0 | 49 | 0.7041 | 0.5714 |
| 0.1815 | 8.0 | 56 | 0.5930 | 0.7143 |
| 0.0827 | 9.0 | 63 | 0.6976 | 0.7143 |
| 0.0613 | 10.0 | 70 | 0.7346 | 0.7143 |
| 0.0356 | 11.0 | 77 | 0.6992 | 0.5714 |
| 0.0158 | 12.0 | 84 | 0.7328 | 0.5714 |
| 0.013 | 13.0 | 91 | 0.7819 | 0.5714 |
| 0.0103 | 14.0 | 98 | 0.8589 | 0.5714 |
| 0.0087 | 15.0 | 105 | 0.9177 | 0.5714 |
| 0.0076 | 16.0 | 112 | 0.9519 | 0.5714 |
| 0.0078 | 17.0 | 119 | 0.9556 | 0.5714 |
| 0.006 | 18.0 | 126 | 0.9542 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-1 | b433654fe3de0c798a9c21f413cdca7ef2f88fe8 | 2022-02-10T07:29:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-32-1 | 5 | null | transformers | 16,285 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6492
- Accuracy: 0.6551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7106 | 1.0 | 13 | 0.6850 | 0.6154 |
| 0.631 | 2.0 | 26 | 0.6632 | 0.6923 |
| 0.5643 | 3.0 | 39 | 0.6247 | 0.7692 |
| 0.3992 | 4.0 | 52 | 0.5948 | 0.7692 |
| 0.1928 | 5.0 | 65 | 0.5803 | 0.7692 |
| 0.0821 | 6.0 | 78 | 0.6404 | 0.6923 |
| 0.0294 | 7.0 | 91 | 0.7387 | 0.6923 |
| 0.0141 | 8.0 | 104 | 0.8270 | 0.6923 |
| 0.0082 | 9.0 | 117 | 0.8496 | 0.6923 |
| 0.0064 | 10.0 | 130 | 0.8679 | 0.6923 |
| 0.005 | 11.0 | 143 | 0.8914 | 0.6923 |
| 0.0036 | 12.0 | 156 | 0.9278 | 0.6923 |
| 0.0031 | 13.0 | 169 | 0.9552 | 0.6923 |
| 0.0029 | 14.0 | 182 | 0.9745 | 0.6923 |
| 0.0028 | 15.0 | 195 | 0.9785 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-4 | 80f38513cce15ca1b4ec577dde7560298c106fe3 | 2022-02-10T07:32:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-32-4 | 5 | null | transformers | 16,286 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5001
- Accuracy: 0.7650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7175 | 1.0 | 13 | 0.6822 | 0.5385 |
| 0.6559 | 2.0 | 26 | 0.6533 | 0.6154 |
| 0.6052 | 3.0 | 39 | 0.5762 | 0.7692 |
| 0.4587 | 4.0 | 52 | 0.4477 | 0.8462 |
| 0.2459 | 5.0 | 65 | 0.4288 | 0.7692 |
| 0.1001 | 6.0 | 78 | 0.5219 | 0.7692 |
| 0.0308 | 7.0 | 91 | 0.8540 | 0.7692 |
| 0.014 | 8.0 | 104 | 0.7789 | 0.7692 |
| 0.0083 | 9.0 | 117 | 0.7996 | 0.7692 |
| 0.0064 | 10.0 | 130 | 0.8342 | 0.7692 |
| 0.0049 | 11.0 | 143 | 0.8612 | 0.7692 |
| 0.0036 | 12.0 | 156 | 0.8834 | 0.7692 |
| 0.0032 | 13.0 | 169 | 0.9067 | 0.7692 |
| 0.003 | 14.0 | 182 | 0.9332 | 0.7692 |
| 0.0028 | 15.0 | 195 | 0.9511 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-8 | b3a0768ce23436a27322dc546979320473ff546b | 2022-02-10T07:35:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-32-8 | 5 | null | transformers | 16,287 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6880
- Accuracy: 0.5014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.712 | 1.0 | 13 | 0.6936 | 0.5385 |
| 0.665 | 2.0 | 26 | 0.6960 | 0.3846 |
| 0.6112 | 3.0 | 39 | 0.7138 | 0.3846 |
| 0.4521 | 4.0 | 52 | 0.8243 | 0.4615 |
| 0.2627 | 5.0 | 65 | 0.7723 | 0.6154 |
| 0.0928 | 6.0 | 78 | 1.2666 | 0.5385 |
| 0.0312 | 7.0 | 91 | 1.2306 | 0.6154 |
| 0.0132 | 8.0 | 104 | 1.3385 | 0.6154 |
| 0.0082 | 9.0 | 117 | 1.4584 | 0.6154 |
| 0.0063 | 10.0 | 130 | 1.5429 | 0.6154 |
| 0.0049 | 11.0 | 143 | 1.5913 | 0.6154 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-4 | 00703a6c60e53799c895da5db58d043f259f752c | 2022-02-09T20:25:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__subj__train-8-4 | 5 | null | transformers | 16,288 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3305
- Accuracy: 0.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6991 | 1.0 | 3 | 0.6772 | 0.75 |
| 0.6707 | 2.0 | 6 | 0.6704 | 0.75 |
| 0.6402 | 3.0 | 9 | 0.6608 | 1.0 |
| 0.5789 | 4.0 | 12 | 0.6547 | 0.75 |
| 0.5211 | 5.0 | 15 | 0.6434 | 0.75 |
| 0.454 | 6.0 | 18 | 0.6102 | 1.0 |
| 0.4187 | 7.0 | 21 | 0.5701 | 1.0 |
| 0.3401 | 8.0 | 24 | 0.5289 | 1.0 |
| 0.3107 | 9.0 | 27 | 0.4737 | 1.0 |
| 0.2381 | 10.0 | 30 | 0.4255 | 1.0 |
| 0.1982 | 11.0 | 33 | 0.3685 | 1.0 |
| 0.1631 | 12.0 | 36 | 0.3200 | 1.0 |
| 0.1234 | 13.0 | 39 | 0.2798 | 1.0 |
| 0.0993 | 14.0 | 42 | 0.2455 | 1.0 |
| 0.0781 | 15.0 | 45 | 0.2135 | 1.0 |
| 0.0586 | 16.0 | 48 | 0.1891 | 1.0 |
| 0.0513 | 17.0 | 51 | 0.1671 | 1.0 |
| 0.043 | 18.0 | 54 | 0.1427 | 1.0 |
| 0.0307 | 19.0 | 57 | 0.1225 | 1.0 |
| 0.0273 | 20.0 | 60 | 0.1060 | 1.0 |
| 0.0266 | 21.0 | 63 | 0.0920 | 1.0 |
| 0.0233 | 22.0 | 66 | 0.0823 | 1.0 |
| 0.0185 | 23.0 | 69 | 0.0751 | 1.0 |
| 0.0173 | 24.0 | 72 | 0.0698 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.0651 | 1.0 |
| 0.0142 | 26.0 | 78 | 0.0613 | 1.0 |
| 0.0151 | 27.0 | 81 | 0.0583 | 1.0 |
| 0.0117 | 28.0 | 84 | 0.0563 | 1.0 |
| 0.0123 | 29.0 | 87 | 0.0546 | 1.0 |
| 0.0121 | 30.0 | 90 | 0.0531 | 1.0 |
| 0.0123 | 31.0 | 93 | 0.0511 | 1.0 |
| 0.0112 | 32.0 | 96 | 0.0496 | 1.0 |
| 0.0103 | 33.0 | 99 | 0.0481 | 1.0 |
| 0.0086 | 34.0 | 102 | 0.0468 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0457 | 1.0 |
| 0.0107 | 36.0 | 108 | 0.0447 | 1.0 |
| 0.0095 | 37.0 | 111 | 0.0439 | 1.0 |
| 0.0102 | 38.0 | 114 | 0.0429 | 1.0 |
| 0.0077 | 39.0 | 117 | 0.0422 | 1.0 |
| 0.0092 | 40.0 | 120 | 0.0415 | 1.0 |
| 0.0083 | 41.0 | 123 | 0.0409 | 1.0 |
| 0.0094 | 42.0 | 126 | 0.0404 | 1.0 |
| 0.0084 | 43.0 | 129 | 0.0400 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.0396 | 1.0 |
| 0.0092 | 45.0 | 135 | 0.0392 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0389 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.0388 | 1.0 |
| 0.0085 | 48.0 | 144 | 0.0387 | 1.0 |
| 0.0071 | 49.0 | 147 | 0.0386 | 1.0 |
| 0.0079 | 50.0 | 150 | 0.0386 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__tweet_eval_stance__all-train | a2c5535e0d9a914022b3fd38952379de3f8362dc | 2022-01-26T21:01:20.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__tweet_eval_stance__all-train | 5 | null | transformers | 16,289 | Entry not found |
SharanSMenon/22-languages-bert-base-cased | 4f721dcc6a206f2039e1540ebe304bf134952663 | 2022-01-15T19:54:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | SharanSMenon | null | SharanSMenon/22-languages-bert-base-cased | 5 | 2 | transformers | 16,290 | ---
metrics:
- accuracy
widget:
- text: "In war resolution, in defeat defiance, in victory magnanimity"
- text: "en la guerra resolución en la derrota desafío en la victoria magnanimidad"
---
[](https://colab.research.google.com/drive/1dqeUwS_DZ-urrmYzB29nTCBUltwJxhbh?usp=sharing)
# 22 Language Identifier - BERT
This model is trained to identify the following 22 different languages.
- Arabic
- Chinese
- Dutch
- English
- Estonian
- French
- Hindi
- Indonesian
- Japanese
- Korean
- Latin
- Persian
- Portuguese
- Pushto
- Romanian
- Russian
- Spanish
- Swedish
- Tamil
- Thai
- Turkish
- Urdu
## Loading the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
```
## Inference
```python
def predict(sentence):
tokenized = tokenizer(sentence, return_tensors="pt")
outputs = model(**tokenized)
return model.config.id2label[outputs.logits.argmax(dim=1).item()]
```
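If a confidence score is also needed, the same logits can be passed through a softmax — a small, hedged variant of the helper above (not part of the original card):

```python
import torch

def predict_with_score(sentence):
    # Same forward pass as predict(), but also return the softmax
    # probability of the top-scoring language.
    tokenized = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**tokenized).logits
    probs = logits.softmax(dim=1)
    score, idx = probs.max(dim=1)
    return model.config.id2label[idx.item()], score.item()
```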
### Examples
```python
sentence1 = "in war resolution, in defeat defiance, in victory magnanimity"
predict(sentence1) # English
sentence2 = "en la guerra resolución en la derrota desafío en la victoria magnanimidad"
predict(sentence2) # Spanish
sentence3 = "هذا هو أعظم إله على الإطلاق"
predict(sentence3) # Arabic
``` |
SoLID/sgd-input-plan-constructor | 3486f321ba871d3cd76f1bde03b09216dda9f988 | 2021-12-30T10:00:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | SoLID | null | SoLID/sgd-input-plan-constructor | 5 | null | transformers | 16,291 | Entry not found |
SongRb/distilbert-base-uncased-finetuned-ner | d0f43d5fdc0593beb636232fa47e914609092272 | 2021-08-31T10:59:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | SongRb | null | SongRb/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,292 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9850826886110537
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0746
- Precision: 0.9347
- Recall: 0.9426
- F1: 0.9386
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
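No usage example is included in the card. A minimal sketch with the generic token-classification pipeline (the model id is taken from this card's header; the input sentence is purely illustrative) could be:

```python
from transformers import pipeline

# Sketch only: run the fine-tuned NER checkpoint and merge word-piece
# predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="SongRb/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```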
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0832 | 1.0 | 3511 | 0.0701 | 0.9317 | 0.9249 | 0.9283 | 0.9827 |
| 0.0384 | 2.0 | 7022 | 0.0701 | 0.9282 | 0.9410 | 0.9346 | 0.9845 |
| 0.0222 | 3.0 | 10533 | 0.0746 | 0.9347 | 0.9426 | 0.9386 | 0.9851 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
SophieTr/PPO_training | 8b24f77d6b4d12e6f0398aae75b226fb782aea28 | 2022-04-16T06:00:08.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | SophieTr | null | SophieTr/PPO_training | 5 | null | transformers | 16,293 | Entry not found |
Sunbird/sunbird-mul-en | 752f122551f006479555553ebac2196fd5c705b4 | 2022-01-05T15:24:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Sunbird | null | Sunbird/sunbird-mul-en | 5 | null | transformers | 16,294 | Entry not found |
SuperAI2-Machima/mt5-small-thai-qg | 5e119d28031a230c1157f546ad60420725a49c11 | 2022-02-23T06:20:38.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"thai",
"th",
"dataset:NSC2018",
"dataset:wiki-documents-nsc",
"dataset:ThaiQACorpus-DevelopmentDataset",
"transformers",
"question-generation",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | SuperAI2-Machima | null | SuperAI2-Machima/mt5-small-thai-qg | 5 | 4 | transformers | 16,295 | ---
tags:
- question-generation
language:
- thai
- th
datasets:
- NSC2018
- wiki-documents-nsc
- ThaiQACorpus-DevelopmentDataset
widget:
- text: "โรงเรียนบ้านขุนด่าน ตั้งอยู่ที่ขุนด่าน จ.นครนายก"
example_title: "Example 01"
- text: "พลเอก ประยุทธ์ จันทร์โอชา (เกิด 21 มีนาคม พ.ศ. 2497) ชื่อเล่น ตู่ เป็นนักการเมืองและอดีตนายทหารบกชาวไทย"
example_title: "Example 02"
- text: "วันที่ 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น"
example_title: "Example 03"
license: mit
---
[SuperAI Engineer Season 2](https://superai.aiat.or.th/), [Machima](https://machchima.superai.me/)
[Google's mT5](https://github.com/google-research/multilingual-t5), [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg')
source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'
print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=50,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
#Predicted Summary Text :
#answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
``` |
TehranNLP/bert-base-uncased-mnli | 02962fb786a52d0d7d4025386b4d706c9d0a8b6d | 2021-06-03T10:44:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP | null | TehranNLP/bert-base-uncased-mnli | 5 | null | transformers | 16,296 | Entry not found |
TehranNLP-org/albert-base-v2-avg-mnli | 48cd20c15492c7cdce2b56169ea1cb8e000334ff | 2021-07-07T07:39:48.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/albert-base-v2-avg-mnli | 5 | null | transformers | 16,297 | Entry not found |
TehranNLP-org/bert-base-cased-avg-mnli | 328781cdef3395b7ec3afea79607b44b34668ae4 | 2021-07-06T19:15:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-base-cased-avg-mnli | 5 | null | transformers | 16,298 | Entry not found |
TehranNLP-org/bert-base-uncased-avg-mnli | fda7128556981fa40859edcf477bf609f97cc5e2 | 2021-07-06T22:54:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-base-uncased-avg-mnli | 5 | null | transformers | 16,299 | Entry not found |