modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Necrozma/harrypotterbot | 74d60e7c7dd0cbb2e93dfcbdc99843cec5ec5c54 | 2021-12-10T15:14:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Necrozma | null | Necrozma/harrypotterbot | 2 | null | transformers | 23,300 | ---
tags:
- conversational
---
# Harry Potter |
Nevena/test-model-1 | 0768e179bbc090c1de0b54d324eec8acb799ace4 | 2021-11-17T11:04:11.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Nevena | null | Nevena/test-model-1 | 2 | null | transformers | 23,301 | Entry not found |
NibrasShami/DialopGPT-small-HarryPotter | a06a146b443580e8ab724bbc75570a1c5bc930db | 2021-09-25T19:56:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | NibrasShami | null | NibrasShami/DialopGPT-small-HarryPotter | 2 | null | transformers | 23,302 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Norrawee/wangchanberta-w20 | 2fa74b2b3ee0a049b5bfbdf32b3bb4e0161204dc | 2022-02-16T16:12:18.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Norrawee | null | Norrawee/wangchanberta-w20 | 2 | null | transformers | 23,303 | Entry not found |
Norrawee/wangchanberta-w50 | 744247e2186e39dda2ce9fecdd03ad392e64fc1a | 2022-02-17T15:01:54.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Norrawee | null | Norrawee/wangchanberta-w50 | 2 | null | transformers | 23,304 | Entry not found |
Nova/DialoGPT-medium-Lelouch | 6311406a5fceda263a28b8737be97c9abda330a8 | 2021-09-09T11:40:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Nova | null | Nova/DialoGPT-medium-Lelouch | 2 | null | transformers | 23,305 | ---
tags:
- conversational
---
# Lelouch DialoGPT Model |
NovaChrono/twervy | 9a4d7386ced9780c2a8386589b0c61239099e2e4 | 2021-06-03T11:55:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | NovaChrono | null | NovaChrono/twervy | 2 | null | transformers | 23,306 | ---
tags:
- conversational
---
# My Awesome Model |
Numenta/BertSparse80 | efe4ce93fdcc5b21c2d03d3564305c7db516bc28 | 2021-12-03T23:11:04.000Z | [
"pytorch"
] | null | false | Numenta | null | Numenta/BertSparse80 | 2 | null | null | 23,307 | Entry not found |
Numenta/BertSparse90 | c0399bcc6d18666c496d766ce69d97b7a4507a09 | 2021-12-03T23:19:33.000Z | [
"pytorch"
] | null | false | Numenta | null | Numenta/BertSparse90 | 2 | null | null | 23,308 | Entry not found |
Ogayo/mt-adh-en | eda3525392cf49b25b3cf45a10b8d09f786368db | 2021-04-23T05:48:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ogayo | null | Ogayo/mt-adh-en | 2 | null | transformers | 23,309 | Entry not found |
Palak/albert-large-v2_squad | 2eff1e6d5d03f0cef961cdd1c33de90fb879a795 | 2021-12-24T18:13:12.000Z | [
"pytorch",
"albert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Palak | null | Palak/albert-large-v2_squad | 2 | null | transformers | 23,310 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-large-v2_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2_squad
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the **squadV1** dataset.
- "eval_exact_match": 84.80605487228004
- "eval_f1": 91.80638438705844
- "eval_samples": 10808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Palak/distilroberta-base_squad | 1d1487fa0fe494f5b5642c19941260d3668a4d8e | 2021-12-24T18:22:38.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Palak | null | Palak/distilroberta-base_squad | 2 | null | transformers | 23,311 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilroberta-base_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base_squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the **squadV1** dataset.
- "eval_exact_match": 80.97445600756859
- "eval_f1": 88.0153886332912
- "eval_samples": 10790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Palak/xlm-roberta-base_squad | 81c283c73d51306e642f7dfb27cd5634971e5509 | 2021-12-25T11:05:12.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | Palak | null | Palak/xlm-roberta-base_squad | 2 | 1 | transformers | 23,312 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xlm-roberta-base_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
- "eval_exact_match": 82.69631031220435
- "eval_f1": 89.4562841806503
- "eval_samples": 10918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
PaulLerner/multi_passage_bert_triviaqa_without_viquae | 0afa2220e7b0f6b0add65d66f3b82ae1041e98be | 2022-02-18T13:50:47.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | PaulLerner | null | PaulLerner/multi_passage_bert_triviaqa_without_viquae | 2 | null | transformers | 23,313 | Entry not found |
PedroR/xlm-roberta-6-pretrained | 9464872e18c4dcf9a5872ce96c1a38ae106998cf | 2021-07-29T10:55:13.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | PedroR | null | PedroR/xlm-roberta-6-pretrained | 2 | null | transformers | 23,314 | Entry not found |
PedroR/xlm-roberta-7-final | 445a3bcc1f0a1c9b8f658bfd8e6ffb7dd95fde19 | 2021-07-29T17:23:10.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | PedroR | null | PedroR/xlm-roberta-7-final | 2 | null | transformers | 23,315 | Entry not found |
Peter/medium | d12c7042d86d41eb1881b363f746cae67dba39ec | 2022-01-08T01:14:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Peter | null | Peter/medium | 2 | null | transformers | 23,316 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium
This model is a fine-tuned version of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6025
- Rouge1: 81.6007
- Rouge2: 75.1196
- Rougel: 81.4213
- Rougelsum: 81.4956
- Gen Len: 32.4286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 63 | 0.5775 | 65.0748 | 58.8985 | 64.5731 | 63.6249 | 19.0 |
| No log | 2.0 | 126 | 0.5806 | 74.3055 | 69.2025 | 73.4922 | 73.0941 | 17.8571 |
| No log | 3.0 | 189 | 0.6025 | 71.3808 | 66.0359 | 70.1235 | 69.4614 | 18.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Plimpton/distilbert-base-uncased-finetuned-squad | b32ffaa88a9cf0dd8b6550b4b24d9a42d86eae7a | 2021-11-24T17:15:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Plimpton | null | Plimpton/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,317 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5169 | 1.0 | 1642 | 1.6958 |
| 1.1326 | 2.0 | 3284 | 2.0009 |
| 0.8638 | 3.0 | 4926 | 2.4285 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
PremalMatalia/electra-base-best-squad2 | ea15851bb9485b076d7969c2624015333265f620 | 2021-08-04T18:53:58.000Z | [
"pytorch",
"electra",
"question-answering",
"dataset:squad_v2",
"transformers",
"autotrain_compatible"
] | question-answering | false | PremalMatalia | null | PremalMatalia/electra-base-best-squad2 | 2 | 2 | transformers | 23,318 | ---
datasets:
- squad_v2
---
# ELECTRA-base for QA
## Overview
**Language model:** electra-base </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=8
n_epochs=2
base_LM_model = "google/electra-base-discriminator"
learning_rate=1.5e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=100
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
##### There is a special threshold value CLS_threshold=-3 used to more accurately identify no-answer cases (the logic will be made available in the GitHub repo [TBD]).
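A minimal sketch of how such a null-answer threshold is typically applied in SQuAD 2.0-style post-processing. The exact released logic is still TBD, so the function below and its variable names are illustrative assumptions, not the author's implementation:
```python
CLS_THRESHOLD = -3.0  # threshold value quoted above; the released logic may differ

def select_answer(start_logits, end_logits, best_span_score, best_span_text):
    # SQuAD 2.0 convention: the "no answer" score is read from the [CLS] position (index 0).
    null_score = start_logits[0] + end_logits[0]
    # If "no answer" beats the best span by more than the threshold, return an empty answer.
    if null_score - best_span_score > CLS_THRESHOLD:
        return ""
    return best_span_text
```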
## Performance
```
"exact": 79.331256
"f1": 83.232347\t
"total": 11873
"HasAns_exact": 76.501350
"HasAns_f1": 84.314719
"HasAns_total": 5928
"NoAns_exact": 82.153070
"NoAns_f1": 82.153070
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/electra-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Premal Matalia |
Priyajay/xls-r-ab-test | f32444a8f213555ca18bd4cf36859b2b65b2167f | 2022-02-01T04:29:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Priyajay | null | Priyajay/xls-r-ab-test | 2 | null | transformers | 23,319 | ---
language:
- hi
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 248.1278
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Priyajay/xls-r-kn-test | 67769995639549acf33ad7b7a365cc588471f4d8 | 2022-02-01T03:58:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Priyajay | null | Priyajay/xls-r-kn-test | 2 | null | transformers | 23,320 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 26.7866
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
PurpleJacketGuy/My_Jarvis | 24abbc5faf33a0b7d2fb9d3ea11680013ec21757 | 2021-11-17T20:26:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | PurpleJacketGuy | null | PurpleJacketGuy/My_Jarvis | 2 | null | transformers | 23,321 | ---
tags:
- conversational
---
# Jarvis DialoGPT Model |
Pyke/DS-config-1 | 66468fab77c1d779822ec4ec8dfc2fe1896b4b2e | 2021-08-18T17:26:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-1 | 2 | null | transformers | 23,322 | Entry not found |
Pyke/DS-config-14 | 908e0726aad5c6a4d0517f792f7204ac70c68e5c | 2021-08-22T12:46:12.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-14 | 2 | null | transformers | 23,323 | Entry not found |
Pyke/DS-config-3 | 8f97650d7060d67eba77d31f4718cc5c6c62a298 | 2021-08-18T17:43:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-3 | 2 | null | transformers | 23,324 | Entry not found |
Pyke/DS-config-4 | fe57c16dca7e1bcda9d36278bfd5d53e2f0ff323 | 2021-08-18T17:52:30.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-4 | 2 | null | transformers | 23,325 | Entry not found |
Pyke/DS-config-7 | c9d2721d7ebf1f90ccc21e81eb92e8320a86075e | 2021-08-19T17:11:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-7 | 2 | null | transformers | 23,326 | Entry not found |
Pyke/DS-config-9 | 212d9795d2fc454fbcb9f77c95ded197be4a05ef | 2021-08-21T18:28:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-9 | 2 | null | transformers | 23,327 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-04 | ea08c56cb405fb2850d035281872cbdb61a2cfaa | 2021-08-17T13:56:45.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-04 | 2 | null | transformers | 23,328 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test001 | e1bbd260555b283380ce4bdfe4987f2ffba89e05 | 2021-08-16T16:17:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test001 | 2 | null | transformers | 23,329 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test003 | 5a508d672927f3c2a4c49dcee9aac0ed13e0d098 | 2021-08-16T16:23:30.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test003 | 2 | null | transformers | 23,330 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test004 | 749a9b8c8de3a2b14df500fd91a74e396c999a83 | 2021-08-16T16:25:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test004 | 2 | null | transformers | 23,331 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test005 | a75c2901776140c627fb5a76dc2593880952b9b4 | 2021-08-16T16:27:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test005 | 2 | null | transformers | 23,332 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test1 | 1a9414ee209201e7bae7bfa6d7347953df034ebf | 2021-08-13T18:20:14.000Z | [
"pytorch",
"bart",
"transformers"
] | null | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test1 | 2 | null | transformers | 23,333 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test10 | c21603014b0d6ec5f9c8ee75348552233bfdf8d5 | 2021-08-15T17:45:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test10 | 2 | null | transformers | 23,334 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test12 | 8c3f07d201419611bbc72feac9403cfe59275302 | 2021-08-15T18:14:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test12 | 2 | null | transformers | 23,335 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test30 | 597e6b49d95ebda750c5c572ad92a67c68774e62 | 2021-08-16T15:41:37.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test30 | 2 | null | transformers | 23,336 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test33 | 95db5b34fdcc8ae30a61047c5a73b9a62f653f67 | 2021-08-16T15:57:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test33 | 2 | null | transformers | 23,337 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test6 | a7672913cdd40de21982a5eb77382f8b78a06857 | 2021-08-14T18:07:52.000Z | [
"pytorch",
"bart",
"transformers"
] | null | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test6 | 2 | null | transformers | 23,338 | Entry not found |
RAPIDS/distilbert-cyberlogs | 7eb44abb44b0d9e7faa9e8748df975d893b59083 | 2020-10-23T19:46:08.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | RAPIDS | null | RAPIDS/distilbert-cyberlogs | 2 | null | transformers | 23,339 | Entry not found |
RASMUS/wav2vec2-xlsr-300 | 6ab91de05abb0aabf8636b40939dffb3b0800172 | 2022-01-15T22:33:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-300 | 2 | null | transformers | 23,340 | Entry not found |
RAhul03/DialoGPT-small-harrypotter | d4a5199e2d70bf9f38a82af309c40d40b7916d36 | 2021-09-08T15:55:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | RAhul03 | null | RAhul03/DialoGPT-small-harrypotter | 2 | null | transformers | 23,341 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
REAP3R/Chat-bot | a906aefc3f6dcebfc46579810a7a9d56f41067b8 | 2021-09-25T13:56:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | REAP3R | null | REAP3R/Chat-bot | 2 | null | transformers | 23,342 | ---
tags:
- conversational
---
# chatbot |
Rafat/wav2vec2-base-timit-demo-colab | 35cbd974c7d64844f9617e790996d7a6bd5d312f | 2022-02-15T01:18:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Rafat | null | Rafat/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 23,343 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
- Wer: 0.2386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5486 | 4.0 | 500 | 2.1672 | 0.9876 |
| 0.6819 | 8.0 | 1000 | 0.4502 | 0.3301 |
| 0.2353 | 12.0 | 1500 | 0.4352 | 0.2841 |
| 0.1427 | 16.0 | 2000 | 0.4237 | 0.2584 |
| 0.0945 | 20.0 | 2500 | 0.4409 | 0.2545 |
| 0.0671 | 24.0 | 3000 | 0.4257 | 0.2413 |
| 0.0492 | 28.0 | 3500 | 0.4229 | 0.2386 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
RahulRaman/Malayalam-LM-Electra | 0915c978a9bbe989ac6358926f061558db4f245f | 2022-01-25T14:57:25.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Malayalam-LM-Electra | 2 | null | null | 23,344 | Entry not found |
RahulRaman/Malayalam-LM-RoBERTa | 66ef506b4dba2885dd3e8959257c3cd2ad5f8b84 | 2022-02-04T12:59:42.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Malayalam-LM-RoBERTa | 2 | null | null | 23,345 | Entry not found |
Rai220/test1 | 8995760993ad0f06a26092a4651a18d84f2d0f1f | 2021-05-21T11:09:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Rai220 | null | Rai220/test1 | 2 | null | transformers | 23,346 | Entry not found |
Rainiefantasy/GO1984_BERTUncased | 10c05e981d9d00b85ea724acd739c4c7d80d2c9b | 2021-09-14T17:38:06.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Rainiefantasy | null | Rainiefantasy/GO1984_BERTUncased | 2 | 1 | transformers | 23,347 | Entry not found |
Rajaram1996/wav2vec2-large-xlsr-53-tamil | 5a848fdfb106a015443dd7e3efe694217a56c808 | 2022-05-24T14:33:26.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Rajaram1996 | null | Rajaram1996/wav2vec2-large-xlsr-53-tamil | 2 | null | transformers | 23,348 | ---
language:
- ta
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: Rajaram1996/wav2vec2-large-xlsr-53-tamil
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 69.76
---
# Wav2Vec2-Large-XLSR-53-tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
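A minimal sketch of resampling an arbitrary recording to 16 kHz with torchaudio before passing it to the processor (the file name is a placeholder, not from this card):
```python
import torchaudio

# Load any recording and resample it to the 16 kHz rate the model expects.
speech_array, sampling_rate = torchaudio.load("my_recording.wav")  # placeholder path
if sampling_rate != 16_000:
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
speech = speech_array.squeeze().numpy()
```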
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.76 % |
Ramnathan/wav2vec2 | 5d3f5cf1c054ef3cc5f4e55c266e97c025d0a0f5 | 2021-07-15T13:52:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Ramnathan | null | Ramnathan/wav2vec2 | 2 | null | transformers | 23,349 | Entry not found |
Ranger/Dial0GPT-small-harrypotter | 52e73656f9081e4871782193a061398e027131fb | 2021-10-22T06:17:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Ranger | null | Ranger/Dial0GPT-small-harrypotter | 2 | null | transformers | 23,350 | Entry not found |
RaphBL/great-model | 2fcf4d7bdefe6425772e1457a6112b33e91142ac | 2021-05-27T16:34:11.000Z | [
"pytorch",
"camembert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | RaphBL | null | RaphBL/great-model | 2 | null | transformers | 23,351 |
GreatModel does not solve any NLP problem ... it is for exercise purposes only.
|
Ravika/roberta-base-finetuned | e1f3080143ea0fbcc373b518a1729a3b9e2b97e6 | 2021-12-04T04:28:43.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Ravika | null | Ravika/roberta-base-finetuned | 2 | null | transformers | 23,352 | Entry not found |
Raviraj/Raviraj-bert | 3e3ec1d5d13c0cc077ddf7fc847033bc62aa68cd | 2022-01-08T15:44:44.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Raviraj | null | Raviraj/Raviraj-bert | 2 | null | transformers | 23,353 | Entry not found |
Raviraj/xlm-roberta-large-MLMfintune-hi-fraudcall | a7108671d96035d9f4fe98b66cabeebeb40703d2 | 2022-01-14T09:24:06.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Raviraj | null | Raviraj/xlm-roberta-large-MLMfintune-hi-fraudcall | 2 | null | transformers | 23,354 |
This model is fine-tuned for masked language modeling.
I used the xlm-roberta-large model and continued pretraining on over half a million tokens of
Hindi fraud-call transcripts.
You can load this model with the from_pretrained() method from the transformers library.
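A minimal loading sketch (the model id comes from this entry; the fill-mask pipeline call and the example sentence are illustrative assumptions, not the author's code):
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_name = "Raviraj/xlm-roberta-large-MLMfintune-hi-fraudcall"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Fill a masked token in a Hindi sentence (the sentence is illustrative).
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("मैं बैंक <mask> कर रहा हूँ।"))
```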
Please note that it works well on general Hindi, but its results on native-language dialogues
are improved compared to general-purpose models. |
Razvanip/wav2vec2-base-timit-demo-colab | 684cd4ea91ab2bc23d756b46ce54a727799c1a08 | 2022-01-12T15:06:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Razvanip | null | Razvanip/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 23,355 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7195
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.0306 | 0.8 | 100 | 3.0392 | 1.0 |
| 2.9429 | 1.6 | 200 | 3.2416 | 1.0 |
| 2.7792 | 2.4 | 300 | 2.7195 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Redolid/DialoGPT-small-Rick | e9549beb5bd46f0185c4a889ffccd030853ed8d3 | 2021-08-28T18:16:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Redolid | null | Redolid/DialoGPT-small-Rick | 2 | 1 | transformers | 23,356 | ---
tags:
- conversational
---
# Rick DialoGPT Model
> Following the https://github.com/RuolinZheng08/twewy-discord-chatbot tutorial. |
RenZHU/t5-small-finetuned-xsum | 2b027b118bf561e44f7153a0be147dcec0ea225d | 2022-01-09T03:09:55.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | RenZHU | null | RenZHU/t5-small-finetuned-xsum | 2 | 1 | transformers | 23,357 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5310
- Rouge1: 27.9232
- Rouge2: 7.5324
- Rougel: 22.035
- Rougelsum: 22.0304
- Gen Len: 18.8116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.7564 | 1.0 | 51012 | 2.5310 | 27.9232 | 7.5324 | 22.035 | 22.0304 | 18.8116 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
RifsxD/DialoGPT-medium-raifu | f24f31b4c89d80002bb2d374fa244f75d64ef6cc | 2021-06-03T11:27:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | RifsxD | null | RifsxD/DialoGPT-medium-raifu | 2 | null | transformers | 23,358 | ---
tags:
- conversational
---
# My Awesome Model |
Ritchie/DialoGPT-small-Rickandmorty | cdf45fddd09e0e1eb7930713cc0a75c5695d9959 | 2021-08-27T15:20:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ritchie | null | Ritchie/DialoGPT-small-Rickandmorty | 2 | null | transformers | 23,359 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
RizqFarIDN/DialoGPT-medium-harrypotter | 3a394dcee73a277be38936ff82569c1de69a980f | 2021-11-25T09:20:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | RizqFarIDN | null | RizqFarIDN/DialoGPT-medium-harrypotter | 2 | null | transformers | 23,360 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
RobinMari/DialoGPT-small-mikoto | 21a28b0cb7217c3016a4b396e194601854e85b59 | 2021-11-04T04:41:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | RobinMari | null | RobinMari/DialoGPT-small-mikoto | 2 | null | transformers | 23,361 | ---
tags:
- conversational
---
# Mikoto Jinba DialoGPT Model |
Rolv-Arild/xls-r-300m-npsc-3 | bf5f289f21d76994651fe14722bf0a0ca7001697 | 2022-02-02T12:29:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Rolv-Arild | null | Rolv-Arild/xls-r-300m-npsc-3 | 2 | null | transformers | 23,362 | Entry not found |
Roy029/distilroberta-base-finetuned-wikitext2 | f132b033a2b64bc677b84eff57daa9a02a60487d | 2021-11-03T15:01:48.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Roy029 | null | Roy029/distilroberta-base-finetuned-wikitext2 | 2 | null | transformers | 23,363 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 58 | 2.2650 |
| No log | 2.0 | 116 | 2.2408 |
| No log | 3.0 | 174 | 2.1696 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Roy029/japanese-roberta-base-finetuned-wikitext2 | c541dee3ce89207128577a3d32e3f2c1353ab08b | 2021-11-04T05:25:22.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Roy029 | null | Roy029/japanese-roberta-base-finetuned-wikitext2 | 2 | null | transformers | 23,364 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: japanese-roberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# japanese-roberta-base-finetuned-wikitext2
This model is a fine-tuned version of [rinna/japanese-roberta-base](https://huggingface.co/rinna/japanese-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 3.4128 |
| No log | 2.0 | 36 | 3.1374 |
| No log | 3.0 | 54 | 3.2285 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
RuudVelo/XLSR-Wav2Vec2-Maltese-1 | 93577ffbd40f51c9c080f51fb4379ef258cd90f3 | 2021-07-05T17:21:59.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | RuudVelo | null | RuudVelo/XLSR-Wav2Vec2-Maltese-1 | 2 | null | transformers | 23,365 | ---
language: mt
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Maltese by RuudVelo
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mt
type: common_voice
args: mt
metrics:
- name: Test WER
type: wer
value: 30.0
---
## Evaluation on Common Voice Maltese Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "RuudVelo/XLSR-Wav2Vec2-Maltese-1"
device = "cuda"
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "mt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 30.0 % |
RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt | 57c6387c9ba8abe6959a7ccc9850e06e6cb22da2 | 2022-03-24T11:57:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | RuudVelo | null | RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt | 2 | null | transformers | 23,366 | ---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- mt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-1b-cv8-mt
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mt
metrics:
- name: Test WER
type: wer
value: 17.57
- name: Test CER
type: cer
value: 3.86
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mt
metrics:
- name: Test WER
type: wer
value: null
- name: Test CER
type: cer
value: null
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-cv8-mt
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2210
- Wer: 0.1974
## Model description
Note: another version of this model, with a KenLM 3-gram language model, is available and performs better than this one. See https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt-lm
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following config and hyperparameters were used during training:
```python
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-1b",
    attention_dropout=0.05,
    hidden_dropout=0.05,
    feat_proj_dropout=0.05,
    mask_time_prob=0.55,
    mask_feature_prob=0.10,
    layerdrop=0.05,
    ctc_zero_infinity=True,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=repo_name,
    group_by_length=True,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=50,
    gradient_checkpointing=True,
    fp16=True,
    save_steps=400,
    eval_steps=400,
    logging_steps=400,
    learning_rate=5.5e-05,
    warmup_steps=500,
    save_total_limit=2,
    push_to_hub=True,
    report_to="tensorboard",
)
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4564 | 13.33 | 400 | 0.3783 | 0.3981 |
| 0.7931 | 26.66 | 800 | 0.2377 | 0.2298 |
| 0.5364 | 39.98 | 1200 | 0.2210 | 0.1974 |
Note that the test WER of 19.74 differs from the 17.57 reported above. This was due to a bug found while processing files with an older version of the datasets library; the correct library versions are listed below.
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
RuudVelo/wav2vec2-large-xlsr-53-frisian | 71168482ff1ed147f8516e85ffbdcc4d11d47a88 | 2021-07-05T17:26:15.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fy-NL",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | RuudVelo | null | RuudVelo/wav2vec2-large-xlsr-53-frisian | 2 | null | transformers | 23,367 | ---
language: fy-NL
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-frisian by RuudVelo
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 18.73
---
## Evaluation on Common Voice Frisian Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "RuudVelo/wav2vec2-large-xlsr-53-frisian"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "fy-NL", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 18.73 % |
S34NtheGuy/DialoGPT-small-MJOLNIR_Soul | 82272d9c36949e6dca20bd35c67e49eec44da8c8 | 2021-10-10T18:37:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | S34NtheGuy | null | S34NtheGuy/DialoGPT-small-MJOLNIR_Soul | 2 | null | transformers | 23,368 | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
SCS/Fine-tuned-JCSD | ad34266cad9032f21f37f99cfb0307ebae8d7851 | 2021-08-14T02:27:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SCS | null | SCS/Fine-tuned-JCSD | 2 | null | transformers | 23,369 | Entry not found |
SCS/Fine-tuned-PCSD | 3e3b1a35c6055b25f1903c9129d74d2c06be4b6f | 2021-08-13T10:02:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SCS | null | SCS/Fine-tuned-PCSD | 2 | null | transformers | 23,370 | Entry not found |
SEBIS/code_trans_t5_base_api_generation_multitask_finetune | 0122785f80564982f19e045e38832f712cfb190e | 2021-06-23T04:01:50.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_api_generation_multitask_finetune | 2 | null | transformers | 23,371 | ---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis.
## Intended uses & limitations
The model can be used to generate API usage recommendations for Java programming tasks.
### How to use
Here is how to use this model to generate API recommendations using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/api%20generation/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
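As a rough illustration only (not the authors' original TPU training code), the Hugging Face `transformers` library provides an Adafactor implementation whose relative-step mode follows the inverse square root schedule mentioned above; everything beyond the model name is an assumption:
```python
# Hypothetical sketch of the optimizer setup described above, using the
# transformers Adafactor implementation; not the original T5/TPU training code.
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelWithLMHead.from_pretrained(
    "SEBIS/code_trans_t5_base_api_generation_multitask_finetune")

# relative_step=True with warmup_init=True gives the inverse square root
# learning-rate decay mentioned above; lr must be None in that mode.
optimizer = Adafactor(model.parameters(), scale_parameter=True,
                      relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)  # exposes the current lr for logging
```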
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 320,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data.
## Evaluation results
For the api recommendation generation task, different models achieve the following results (in BLEU score); a minimal scoring sketch is included after the table.
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
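The BLEU numbers above come from the CodeTrans evaluation. As a minimal, hypothetical sketch of how corpus-level BLEU could be computed for generated outputs against references (the file names and the use of `sacrebleu` are assumptions, not the original evaluation code):
```python
# Hedged sketch: corpus BLEU between generated outputs and references using
# sacrebleu; "predictions.txt" and "references.txt" are hypothetical files with
# one example per line.
import sacrebleu

with open("predictions.txt") as p, open("references.txt") as r:
    hypotheses = [line.strip() for line in p]
    references = [line.strip() for line in r]

score = sacrebleu.corpus_bleu(hypotheses, [references])
print(round(score.score, 2))
```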
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_documentation_generation_go_multitask | b61089b22cd74878c304d8a9f59e1d5b9faf47ac | 2021-06-23T04:13:43.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_go_multitask | 2 | null | transformers | 23,372 | ---
tags:
- summarization
widget:
- text: "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
---
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/go/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 340,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_commit_generation_transfer_learning_finetune | 80dbce95edc24adc4a9498eeae0dd3c68ff7230d | 2021-06-23T05:01:57.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_commit_generation_transfer_learning_finetune | 2 | null | transformers | 23,373 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model on git commits using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the git commit message generation task for the java commit changes.
## Intended uses & limitations
The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_commit_generation_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_commit_generation_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/commit%20generation/base_model.ipynb).
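If the input should come from a real repository rather than an already-tokenized string, one possible (hypothetical) way to feed the pipeline defined above is to pass it the raw output of `git diff`; note that the card recommends tokenized changes for best quality:
```python
# Hypothetical usage: generate a commit message for the latest change in a
# local repository by piping `git diff` output into the pipeline defined above.
import subprocess

diff = subprocess.run(["git", "diff", "HEAD~1", "HEAD"],
                      capture_output=True, text=True).stdout
print(pipeline([diff]))
```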
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes.
## Evaluation results
For the git commit message generation task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_api_generation_multitask | facb8ccb7862a24d90c9df13d9a420d92ffb2f6f | 2021-06-23T05:40:22.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_api_generation_multitask | 2 | null | transformers | 23,374 | ---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate api usage for the java programming tasks.
### How to use
Here is how to use this model to generate api usage recommendations using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the api recommendation generation task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_api_generation_multitask_finetune | e322e6f300d3ac18da321e3a0e2a1753ecb37208 | 2021-06-23T05:45:56.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_api_generation_multitask_finetune | 2 | null | transformers | 23,375 | ---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis.
## Intended uses & limitations
The model could be used to generate api usage for the java programming tasks.
### How to use
Here is how to use this model to generate api usage recommendations using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/api%20generation/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 130,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data.
## Evaluation results
For the api recommendation generation task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask | 115bd09d4b88324b683740a1f70e4443b8a633fa | 2021-06-23T06:56:03.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask | 2 | null | transformers | 23,376 | ---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---
# CodeTrans model for code documentation generation javascript
Pretrained model on programming language javascript using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/javascript/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune | 2ed9dcea29b10721f37680ffd468f130e123fb72 | 2021-06-23T07:21:51.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune | 2 | null | transformers | 23,377 | ---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the php function/method.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/php/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask | 874d13cd9b322200b58b0d17e3918558ddf8f701 | 2021-06-23T07:52:40.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask | 2 | null | transformers | 23,378 | ---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. (In total, 260,000 training steps were performed.)
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_program_synthese_multitask | aec73cd67d4989009025ea14ee00c02e8006bd7c | 2021-06-23T08:39:57.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_program_synthese_multitask | 2 | null | transformers | 23,379 | ---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
Pretrained model on a lisp inspired DSL programming language using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate lisp inspired DSL code given a human language task description.
### How to use
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/program%20synthesis/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the program synthesis task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_program_synthese_multitask_finetune | 743182f78e2c1e51f543fba08fabc0b5cd0a8544 | 2021-06-23T08:45:32.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_program_synthese_multitask_finetune | 2 | null | transformers | 23,380 | ---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
Pretrained model on a lisp inspired DSL programming language using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code.
## Intended uses & limitations
The model could be used to generate lisp inspired DSL code given a human language task description.
### How to use
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/program%20synthesis/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data.
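The original fine-tuning used the T5 TPU setup; purely as an illustrative sketch, a comparable fine-tuning step could be expressed with the Hugging Face Trainer API. The tiny in-memory dataset, its `nl`/`dsl` columns, and the hyperparameters below are made-up assumptions, not the authors' data or settings:
```python
# Hypothetical Trainer-based fine-tuning sketch; dataset content, column names
# and hyperparameters are illustrative only.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

train = Dataset.from_dict({
    "nl":  ["compute the difference of elements in a and b"],
    "dsl": ["( map a ( partial1 b - ) )"],  # made-up target program
})

def tokenize(batch):
    enc = tokenizer(batch["nl"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(batch["dsl"], truncation=True, max_length=512)["input_ids"]
    return enc

train = train.map(tokenize, batched=True, remove_columns=["nl", "dsl"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="lisp-dsl-finetune",
                                  max_steps=10, per_device_train_batch_size=1),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```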
## Evaluation results
For the program synthesis task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask | 2e6a3e36f64c554151c2c47c482ae4a21c50b099 | 2021-06-23T08:57:24.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask | 2 | null | transformers | 23,381 | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/csharp/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask | 90e0121737fbb3b02d7700ab1fb28099fd40102c | 2021-06-23T10:12:08.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask | 2 | null | transformers | 23,382 | ---
tags:
- summarization
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_program_synthese_multitask_finetune | 90a067df0db67790112de3b06d8a7a16b76a37d3 | 2021-06-23T10:17:40.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_program_synthese_multitask_finetune | 2 | null | transformers | 23,383 | ---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
Pretrained model on a lisp inspired DSL programming language using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code.
## Intended uses & limitations
The model could be used to generate lisp inspired DSL code given a human language task description.
### How to use
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/program%20synthesis/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 16,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data.
## Evaluation results
For the program synthesis task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune | c211c1f11ae175b04aa2f7c6d1140b9c884e5d89 | 2021-06-23T10:26:05.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune | 2 | null | transformers | 23,384 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.
## Evaluation results
For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_cls_cs | 4abb1e6912f57496b1573dea236fbce80ddaffbe | 2021-06-23T10:27:21.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech",
"dataset:jrc-acquis",
"transformers",
"classification Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_cs | 2 | null | transformers | 23,385 |
---
language: Cszech
tags:
- classification Cszech model
datasets:
- jrc-acquis
widget:
- text: "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"
---
# legal_t5_small_cls_cs model
Model for classification of legal text written in Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in Czech.
### How to use
Here is how to use this model to classify legal text written in Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_cls_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 18 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 and a batch size of 64. It has about 60 million parameters (as noted in the model description) and uses the encoder-decoder architecture. The optimizer is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
### Pretraining
## Evaluation results
When used on the classification test dataset, the model achieves the following results (see the scoring sketch after the table):
Test results:
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_cs | 0.6297|
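As a hedged illustration of how an F1 score like the one above could be computed from model predictions, here is a minimal sketch; the label names and the averaging method are assumptions, since neither is stated in this card.

```python
from sklearn.metrics import f1_score

# Placeholder gold and predicted class labels (purely illustrative).
gold_labels = ["agriculture", "trade", "agriculture", "competition"]
predictions = ["agriculture", "trade", "competition", "competition"]

# "weighted" averaging is an assumption; the card does not specify it.
print(f1_score(gold_labels, predictions, average="weighted"))
```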
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_es | 4a8ed74cafbaba23c123eab80b7ab05242020900 | 2021-06-23T10:29:12.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish",
"dataset:jrc-acquis",
"transformers",
"classification Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_es | 2 | null | transformers | 23,386 |
---
language: Spanish
tags:
- classification Spanish model
datasets:
- jrc-acquis
widget:
- text: "Reglamento (CE) no 90/2001 de la Comisión de 17 de enero de 2001 que modifica el Reglamento (CE) n° 800/1999 por el que se establecen disposiciones comunes de aplicación del régimen de restituciones por exportación de productos agrícolas LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Visto el Reglamento (CEE) n° 1766/92 del Consejo, de 30 de junio de 1992, por el que se establece la organización común de mercados en el sector de los cereales(1), cuya última modificación la constituye el Reglamento (CE) n° 1666/2000(2), y, en particular, sus artículos 13 y 21, así como las disposiciones correspondientes de los demás Reglamentos por los que se establecen organizaciones comunes de mercados de productos agrícolas, Considerando lo siguiente: (1) En el caso de exportación de productos presentados a granel o en unidades no normalizadas, en los que es evidente que la masa neta exacta de los productos no puede conocerse hasta después de cargar el medio de transporte, el apartado 6 del artículo 5 del Reglamento (CE) n° 800/1999 de la Comisión(3), modificado por el Reglamento (CE) n° 1557/2000(4) establece la aplicación de una reducción de la restitución cuando la masa neta efectivamente cargada sea inferior a un determinado porcentaje de la masa neta estimada. No obstante, para la aplicación de esta disposición conviene tener en cuenta las limitaciones inherentes a los medios de transporte de navegación marítima o interior. En efecto, en el caso de los productos exportados a granel, puede ocurrir que las cantidades declaradas no se carguen en su totalidad debido, en particular, a la decisión del responsable del medio de transporte que puede ordenar la suspensión de la carga por razones técnicas o debido a un exceso de carga imputable a los demás exportadores. (2) Dado que determinados cortes de carne de porcino no se presentan en embalajes ni son, por naturaleza, homogéneos, conviene ampliar la categoría de unidades no normalizadas a este tipo de productos. (3) En lo que respecta a la noción de lugar de carga, en el comercio de exportación de productos agrícolas se presenta una multitud de situaciones comerciales y administrativas; por consiguiente, es difícil establecer una norma única y conviene autorizar a los Estados miembros para que determinen el lugar más apropiado para efectuar los controles físicos para los productos agrícolas exportados que se benefician de una restitución. A estos efectos, parece justificado determinar el lugar de carga, de forma diferente, en función de que los productos sean cargados en contenedores o, por el contrario, a granel, en sacos o en cajas y no se carguen posteriormente en contenedores. Asimismo, es conveniente que, cuando existan motivos debidamente justificados, se permita que las autoridades aduaneras acepten para los productos agrícolas que se beneficien, de una restitución declaraciones de exportación presentadas en una oficina de aduanas que no sea la del lugar donde vayan a cargarse los productos. (4) En el caso de los productos sujetos al régimen de mercancías de retorno, es oportuno prever la posibilidad de que la reintroducción se efectúe, bien por el Estado miembros del que sean originarios los productos, bien por el Estado miembro exportador de la primera exportación. (5) Conviene modificar el Reglamento (CE) n° 800/1999 en consecuencia. (6) Las medidas previstas en el presente Reglamento se ajustan al dictamen de todos los Comités de gestión interesados. 
HA ADOPTADO EL PRESENTE REGLAMENTO: Artículo 1 El Reglamento (CE) n° 800/1999 se modificará como sigue: 1) En el apartado 6 del articulo 5, el párrafo tercero se sustituirá por el texto siguiente: %quot%No se concederá ninguna restitución por la cantidad que sobrepase el 110 % de la masa neta estimada. Cuando la masa efectivamente cargada sea inferior al 90 % de la masa neta estimada, la restitución por la masa neta efectivamente cargada se reducirá un 10 % en relación con la diferencia entre la restitución correspondiente al 90 % de la masa neta estimada y la restitución correspondiente a la masa efectivamente cargada. No obstante, en los casos de exportación par vía marítima o por vía navegable interior, la restitución se pagará por la masa neta efectivamente cargada cuando el exportador pueda aportar la prueba, refrendada por el responsable del medio de transporte, de que el hecho de que no se cargara la totalidad de sus mercancías se debió a las limitaciones inherentes a ese tipo de transporte o a un exceso de carga imputable a uno o a varios de los demás exportadores. En caso de que el exportador haya utilizado el procedimiento de domiciliación previsto en el artículo 283 del Reglamento (CEE) n° 2454/93 serán aplicables las disposiciones del presente párrafo siempre que las autoridades aduaneras hayan autorizado la rectificación de los documentos contables en los que los productos exportados hayan sido inscritos.%quot%. 2) En el apartado 6 del artículo 5, el párrafo cuarto se sustituirá por el texto siguiente: %quot%Se considerarán productos en unidades no estandarizadas los animales vivos, las (medias) canales, los cuartos, partes delanteras, jamones, paletillas, pechos y lomos.%quot%. 3) El apartado 7 del articulo 5 se sustituirá por el texto siguiente: %quot%7. Cualquier persona que exporte productos por los cuales solicite la concesión de la restitución estará obligada a lo siguiente: a) presentar la declaración de exportación en la oficina de aduanas competente del lugar en que los productos vayan a cargarse en el transporte que vaya a efectuar la exportación; b) informar a dicha oficina de aduanas, coma mínimo 24 horas antes del comienzo de las operaciones de carga, e indicar la duración prevista de las operaciones de carga; las autoridades competentes podrán modificar el plazo de 24 horas. Se podrá considerar como lugar de carga en el transporte de los productos destinados a la exportación: - en el caso de los productos que se exporten cargados en contenedores, el lugar donde se carguen en éstos las mercancías, - en el caso de los productos que se exporten a granel, en sacos, cajones, cajas, botellas, etc. sin cargarse en contenedores, el lugar donde se cargue el medio de transporte por el que las mercancías vayan a salir del territorio aduanero de la Comunidad. La oficina de aduanas competente podrá autorizar las operaciones de carga una vez aceptada la declaración de exportación y antes de finalizar el plazo a que se refiere la letra b). La oficina de aduanas competente deberá estar en condiciones de realizar el control físico y de aplicar las medidas de identificación necesarias para el transporte hacia la oficina de salida del territorio aduanero de la Comunidad. 
Si por razones de organización administrativa o por otras razones debidamente justificadas, no pueden aplicarse las disposiciones del párrafo primero, la declaración de exportación, sólo podrá ser presentada en la oficina de aduanas competente del Estado miembro en cuestión, y, en el caso de un control físico de conformidad con el Reglamento (CEE) n° 386/90, el producto presentado deberá ser descargado completamente. No obstante, la descarga completa no será obligatoria cuando las autoridades competentes puedan garantizar la realización de un control físico exhaustivo.%quot%. 4) En el apartado 3 del artículo 25, el último párrafo se sustituirá por el texto siguiente: %quot%La presente disposición sólo se aplicará cuando el régimen de retorno haya sido utilizado en el Estado miembro donde se haya aceptado la declaración de exportación de la primera exportación o en el Estado miembro de origen, de conformidad con el artículo 15 de la Directiva 97/78/CE del Consejo(5), por la que se establecen los principios relativos a la organización de controles veterinarios de los productos que se introduzcan en la Comunidad procedentes de terceros países.%quot%. Artículo 2 El presente Reglamento entrará en vigor el séptimo día siguiente al de su publicación en el Diario Oficial de las Comunidades Europeas. A petición de los exportadores, las disposiciones del apartado 1 del articulo 1 se aplicarán a los expedientes de restituciones que aún no hayan sido cerrados en el momento de la entrada en vigor del presente Reglamento. El presente Reglamento será obligatorio en todos sus elementos y directamente aplicable en cada Estado miembro. Hecho en Bruselas, el 17 de enero de 2001. Por la Comisión Franz Fischler Miembro de la Comisión (1) DO L 181 de 1.7.1992, p. 21. (2) DO L 193 de 29.7.2000, p. 1. (3) DO L 102 de 17.4.1999, p. 11. (4) DO L 179 de 18.7.2000, p. 6. (5) DO L 24 de 30.1.1998, p. 9."
---
# legal_t5_small_cls_es model
Model for classification of legal text written in Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on the JRC-Acquis parallel corpus.
## Model description
legal_t5_small_cls_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in Spanish.
### How to use
Here is how to use this model to classify legal text written in Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Reglamento (CE) no 90/2001 de la Comisión de 17 de enero de 2001 que modifica el Reglamento (CE) n° 800/1999 por el que se establecen disposiciones comunes de aplicación del régimen de restituciones por exportación de productos agrícolas LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Visto el Reglamento (CEE) n° 1766/92 del Consejo, de 30 de junio de 1992, por el que se establece la organización común de mercados en el sector de los cereales(1), cuya última modificación la constituye el Reglamento (CE) n° 1666/2000(2), y, en particular, sus artículos 13 y 21, así como las disposiciones correspondientes de los demás Reglamentos por los que se establecen organizaciones comunes de mercados de productos agrícolas, Considerando lo siguiente: (1) En el caso de exportación de productos presentados a granel o en unidades no normalizadas, en los que es evidente que la masa neta exacta de los productos no puede conocerse hasta después de cargar el medio de transporte, el apartado 6 del artículo 5 del Reglamento (CE) n° 800/1999 de la Comisión(3), modificado por el Reglamento (CE) n° 1557/2000(4) establece la aplicación de una reducción de la restitución cuando la masa neta efectivamente cargada sea inferior a un determinado porcentaje de la masa neta estimada. No obstante, para la aplicación de esta disposición conviene tener en cuenta las limitaciones inherentes a los medios de transporte de navegación marítima o interior. En efecto, en el caso de los productos exportados a granel, puede ocurrir que las cantidades declaradas no se carguen en su totalidad debido, en particular, a la decisión del responsable del medio de transporte que puede ordenar la suspensión de la carga por razones técnicas o debido a un exceso de carga imputable a los demás exportadores. (2) Dado que determinados cortes de carne de porcino no se presentan en embalajes ni son, por naturaleza, homogéneos, conviene ampliar la categoría de unidades no normalizadas a este tipo de productos. (3) En lo que respecta a la noción de lugar de carga, en el comercio de exportación de productos agrícolas se presenta una multitud de situaciones comerciales y administrativas; por consiguiente, es difícil establecer una norma única y conviene autorizar a los Estados miembros para que determinen el lugar más apropiado para efectuar los controles físicos para los productos agrícolas exportados que se benefician de una restitución. A estos efectos, parece justificado determinar el lugar de carga, de forma diferente, en función de que los productos sean cargados en contenedores o, por el contrario, a granel, en sacos o en cajas y no se carguen posteriormente en contenedores. Asimismo, es conveniente que, cuando existan motivos debidamente justificados, se permita que las autoridades aduaneras acepten para los productos agrícolas que se beneficien, de una restitución declaraciones de exportación presentadas en una oficina de aduanas que no sea la del lugar donde vayan a cargarse los productos. (4) En el caso de los productos sujetos al régimen de mercancías de retorno, es oportuno prever la posibilidad de que la reintroducción se efectúe, bien por el Estado miembros del que sean originarios los productos, bien por el Estado miembro exportador de la primera exportación. (5) Conviene modificar el Reglamento (CE) n° 800/1999 en consecuencia. (6) Las medidas previstas en el presente Reglamento se ajustan al dictamen de todos los Comités de gestión interesados. 
HA ADOPTADO EL PRESENTE REGLAMENTO: Artículo 1 El Reglamento (CE) n° 800/1999 se modificará como sigue: 1) En el apartado 6 del articulo 5, el párrafo tercero se sustituirá por el texto siguiente: %quot%No se concederá ninguna restitución por la cantidad que sobrepase el 110 % de la masa neta estimada. Cuando la masa efectivamente cargada sea inferior al 90 % de la masa neta estimada, la restitución por la masa neta efectivamente cargada se reducirá un 10 % en relación con la diferencia entre la restitución correspondiente al 90 % de la masa neta estimada y la restitución correspondiente a la masa efectivamente cargada. No obstante, en los casos de exportación par vía marítima o por vía navegable interior, la restitución se pagará por la masa neta efectivamente cargada cuando el exportador pueda aportar la prueba, refrendada por el responsable del medio de transporte, de que el hecho de que no se cargara la totalidad de sus mercancías se debió a las limitaciones inherentes a ese tipo de transporte o a un exceso de carga imputable a uno o a varios de los demás exportadores. En caso de que el exportador haya utilizado el procedimiento de domiciliación previsto en el artículo 283 del Reglamento (CEE) n° 2454/93 serán aplicables las disposiciones del presente párrafo siempre que las autoridades aduaneras hayan autorizado la rectificación de los documentos contables en los que los productos exportados hayan sido inscritos.%quot%. 2) En el apartado 6 del artículo 5, el párrafo cuarto se sustituirá por el texto siguiente: %quot%Se considerarán productos en unidades no estandarizadas los animales vivos, las (medias) canales, los cuartos, partes delanteras, jamones, paletillas, pechos y lomos.%quot%. 3) El apartado 7 del articulo 5 se sustituirá por el texto siguiente: %quot%7. Cualquier persona que exporte productos por los cuales solicite la concesión de la restitución estará obligada a lo siguiente: a) presentar la declaración de exportación en la oficina de aduanas competente del lugar en que los productos vayan a cargarse en el transporte que vaya a efectuar la exportación; b) informar a dicha oficina de aduanas, coma mínimo 24 horas antes del comienzo de las operaciones de carga, e indicar la duración prevista de las operaciones de carga; las autoridades competentes podrán modificar el plazo de 24 horas. Se podrá considerar como lugar de carga en el transporte de los productos destinados a la exportación: - en el caso de los productos que se exporten cargados en contenedores, el lugar donde se carguen en éstos las mercancías, - en el caso de los productos que se exporten a granel, en sacos, cajones, cajas, botellas, etc. sin cargarse en contenedores, el lugar donde se cargue el medio de transporte por el que las mercancías vayan a salir del territorio aduanero de la Comunidad. La oficina de aduanas competente podrá autorizar las operaciones de carga una vez aceptada la declaración de exportación y antes de finalizar el plazo a que se refiere la letra b). La oficina de aduanas competente deberá estar en condiciones de realizar el control físico y de aplicar las medidas de identificación necesarias para el transporte hacia la oficina de salida del territorio aduanero de la Comunidad. 
Si por razones de organización administrativa o por otras razones debidamente justificadas, no pueden aplicarse las disposiciones del párrafo primero, la declaración de exportación, sólo podrá ser presentada en la oficina de aduanas competente del Estado miembro en cuestión, y, en el caso de un control físico de conformidad con el Reglamento (CEE) n° 386/90, el producto presentado deberá ser descargado completamente. No obstante, la descarga completa no será obligatoria cuando las autoridades competentes puedan garantizar la realización de un control físico exhaustivo.%quot%. 4) En el apartado 3 del artículo 25, el último párrafo se sustituirá por el texto siguiente: %quot%La presente disposición sólo se aplicará cuando el régimen de retorno haya sido utilizado en el Estado miembro donde se haya aceptado la declaración de exportación de la primera exportación o en el Estado miembro de origen, de conformidad con el artículo 15 de la Directiva 97/78/CE del Consejo(5), por la que se establecen los principios relativos a la organización de controles veterinarios de los productos que se introduzcan en la Comunidad procedentes de terceros países.%quot%. Artículo 2 El presente Reglamento entrará en vigor el séptimo día siguiente al de su publicación en el Diario Oficial de las Comunidades Europeas. A petición de los exportadores, las disposiciones del apartado 1 del articulo 1 se aplicarán a los expedientes de restituciones que aún no hayan sido cerrados en el momento de la entrada en vigor del presente Reglamento. El presente Reglamento será obligatorio en todos sus elementos y directamente aplicable en cada Estado miembro. Hecho en Bruselas, el 17 de enero de 2001. Por la Comisión Franz Fischler Miembro de la Comisión (1) DO L 181 de 1.7.1992, p. 21. (2) DO L 193 de 29.7.2000, p. 1. (3) DO L 102 de 17.4.1999, p. 11. (4) DO L 179 de 18.7.2000, p. 6. (5) DO L 24 de 30.1.1998, p. 9."
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_cls_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 22 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 and a batch size of 64. It has about 60 million parameters (as noted in the model description) and uses the encoder-decoder architecture. The optimizer is AdaFactor with an inverse square root learning rate schedule for pre-training.
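A minimal sketch of the AdaFactor setup described above, using the `Adafactor` implementation shipped with `transformers`; its relative-step mode provides an inverse-square-root style learning rate schedule. The exact hyperparameters of the original run are not published here, so treat the values below as assumptions.

```python
from transformers import Adafactor, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("SEBIS/legal_t5_small_cls_es")

# relative_step=True with lr=None enables AdaFactor's internal
# inverse-square-root learning rate schedule; warmup_init is an assumption.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
```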
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
### Pretraining
## Evaluation results
When used on the classification test dataset, the model achieves the following results:
Test results:
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_es | 0.6318|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_finetuned_de | fa191cd43a58a7a5bda06ac551f4cf2ff23ac630 | 2021-06-23T10:30:26.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_finetuned_de | 2 | null | transformers | 23,387 | Entry not found |
SEBIS/legal_t5_small_cls_finetuned_fr | e9c942147d934104eeae823d5359b80d0e41ac8a | 2021-06-23T10:33:21.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_finetuned_fr | 2 | null | transformers | 23,388 | Entry not found |
SEBIS/legal_t5_small_cls_fr | 041fc53356e6900dd0d19f6e73792ef87e4fe4f2 | 2021-06-23T10:36:03.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French",
"dataset:jrc-acquis",
"transformers",
"classification French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_fr | 2 | null | transformers | 23,389 |
---
language: French
tags:
- classification French model
datasets:
- jrc-acquis
widget:
- text: "Règlement (CE) no 264/2005 de la Commission du 16 février 2005 fixant les restitutions à l'exportation dans le secteur de la viande de volaille applicables à partir du 17 février 2005 LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CEE) no 2777/75 du Conseil du 29 octobre 1975 portant organisation commune des marchés dans le secteur de la viande de volaille [1], et notamment son article 8, paragraphe 3, troisième alinéa, considérant ce qui suit: (1) Aux termes de l'article 8 du règlement (CEE) no 2777/75, la différence entre les prix des produits visés à l'article 1er, paragraphe 1, dudit règlement, sur le marché mondial et dans la Communauté, peut être couverte par une restitution à l'exportation. (2) L'application de ces règles et critères à la situation actuelle des marchés dans le secteur de la viande de volaille conduit à fixer la restitution à un montant qui permette la participation de la Communauté au commerce international et tienne compte également du caractère des exportations de ces produits ainsi que de leur importance à l'heure actuelle. (3) L'article 21 du règlement (CE) no 800/1999 de la Commission du 15 avril 1999 portant modalités communes d'application du régime des restitutions à l'exportation pour les produits agricoles [2] prévoit qu'aucune restitution n'est octroyée lorsque les produits ne sont pas de qualité saine, loyale et marchande le jour d'acceptation de la déclaration d'exportation. Afin d'assurer une application uniforme de la réglementation en vigueur, il y a lieu de préciser que, pour bénéficier d'une restitution, les viandes de volailles figurant à l'article 1er du règlement (CEE) no 2777/75 doivent porter la marque de salubrité comme prévu à la directive 71/118/CEE du Conseil du 15 février 1971 relative à des problèmes sanitaires en matière de production et de mise sur le marché de viandes fraîches de volaille [3]. (4) Le comité de gestion de la viande de volaille et des œufs n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les codes des produits pour l'exportation desquels est accordée la restitution visée à l'article 8 du règlement (CEE) no 2777/75 et les montants de cette restitution sont fixés à l'annexe du présent règlement. Toutefois, afin de pouvoir bénéficier de la restitution, les produits entrant dans le champ d'application du chapitre XII de l'annexe de la directive 71/118/CEE doivent également satisfaire aux conditions de marquage de salubrité prévues par cette directive. Article 2 Le présent règlement entre en vigueur le 17 février 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 16 février 2005. Par la Commission Mariann Fischer Boel Membre de la Commission [1] JO L 282 du 1.11.1975, p. 77. Règlement modifié en dernier lieu par le règlement (CE) no 806/2003 (JO L 122 du 16.5.2003, p. 1). [2] JO L 102 du 17.4.1999, p. 11. Règlement modifié en dernier lieu par le règlement (CE) no 671/2004 (JO L 105 du 14.4.2004, p. 5). [3] JO L 55 du 8.3.1971, p. 23. Directive modifiée en dernier lieu par le règlement (CE) no 807/2003 (JO L 122 du 16.5.2003, p. 36). 
-------------------------------------------------- ANNEXE Code des produits | Destination | Unité de mesure | Montant des restitutions | 0105 11 11 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 19 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 91 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 99 9000 | A02 | EUR/100 pcs | 0,80 | 0105 12 00 9000 | A02 | EUR/100 pcs | 1,70 | 0105 19 20 9000 | A02 | EUR/100 pcs | 1,70 | 0207 12 10 9900 | V01 | EUR/100 kg | 41,00 | 0207 12 10 9900 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9190 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9190 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9990 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9990 | A24 | EUR/100 kg | 41,00 | --------------------------------------------------"
---
# legal_t5_small_cls_fr model
Model for classification of legal text written in French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on the JRC-Acquis parallel corpus.
## Model description
legal_t5_small_cls_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in French.
### How to use
Here is how to use this model to classify legal text written in French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Règlement (CE) no 264/2005 de la Commission du 16 février 2005 fixant les restitutions à l'exportation dans le secteur de la viande de volaille applicables à partir du 17 février 2005 LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CEE) no 2777/75 du Conseil du 29 octobre 1975 portant organisation commune des marchés dans le secteur de la viande de volaille [1], et notamment son article 8, paragraphe 3, troisième alinéa, considérant ce qui suit: (1) Aux termes de l'article 8 du règlement (CEE) no 2777/75, la différence entre les prix des produits visés à l'article 1er, paragraphe 1, dudit règlement, sur le marché mondial et dans la Communauté, peut être couverte par une restitution à l'exportation. (2) L'application de ces règles et critères à la situation actuelle des marchés dans le secteur de la viande de volaille conduit à fixer la restitution à un montant qui permette la participation de la Communauté au commerce international et tienne compte également du caractère des exportations de ces produits ainsi que de leur importance à l'heure actuelle. (3) L'article 21 du règlement (CE) no 800/1999 de la Commission du 15 avril 1999 portant modalités communes d'application du régime des restitutions à l'exportation pour les produits agricoles [2] prévoit qu'aucune restitution n'est octroyée lorsque les produits ne sont pas de qualité saine, loyale et marchande le jour d'acceptation de la déclaration d'exportation. Afin d'assurer une application uniforme de la réglementation en vigueur, il y a lieu de préciser que, pour bénéficier d'une restitution, les viandes de volailles figurant à l'article 1er du règlement (CEE) no 2777/75 doivent porter la marque de salubrité comme prévu à la directive 71/118/CEE du Conseil du 15 février 1971 relative à des problèmes sanitaires en matière de production et de mise sur le marché de viandes fraîches de volaille [3]. (4) Le comité de gestion de la viande de volaille et des œufs n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les codes des produits pour l'exportation desquels est accordée la restitution visée à l'article 8 du règlement (CEE) no 2777/75 et les montants de cette restitution sont fixés à l'annexe du présent règlement. Toutefois, afin de pouvoir bénéficier de la restitution, les produits entrant dans le champ d'application du chapitre XII de l'annexe de la directive 71/118/CEE doivent également satisfaire aux conditions de marquage de salubrité prévues par cette directive. Article 2 Le présent règlement entre en vigueur le 17 février 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 16 février 2005. Par la Commission Mariann Fischer Boel Membre de la Commission [1] JO L 282 du 1.11.1975, p. 77. Règlement modifié en dernier lieu par le règlement (CE) no 806/2003 (JO L 122 du 16.5.2003, p. 1). [2] JO L 102 du 17.4.1999, p. 11. Règlement modifié en dernier lieu par le règlement (CE) no 671/2004 (JO L 105 du 14.4.2004, p. 5). [3] JO L 55 du 8.3.1971, p. 23. Directive modifiée en dernier lieu par le règlement (CE) no 807/2003 (JO L 122 du 16.5.2003, p. 36). 
-------------------------------------------------- ANNEXE Code des produits | Destination | Unité de mesure | Montant des restitutions | 0105 11 11 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 19 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 91 9000 | A02 | EUR/100 pcs | 0,80 | 0105 11 99 9000 | A02 | EUR/100 pcs | 0,80 | 0105 12 00 9000 | A02 | EUR/100 pcs | 1,70 | 0105 19 20 9000 | A02 | EUR/100 pcs | 1,70 | 0207 12 10 9900 | V01 | EUR/100 kg | 41,00 | 0207 12 10 9900 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9190 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9190 | A24 | EUR/100 kg | 41,00 | 0207 12 90 9990 | V01 | EUR/100 kg | 41,00 | 0207 12 90 9990 | A24 | EUR/100 kg | 41,00 | --------------------------------------------------"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_cls_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 22 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 and a batch size of 64. It has about 60 million parameters (as noted in the model description) and uses the encoder-decoder architecture. The optimizer is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
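As a rough sketch of the preprocessing step described above, a unigram vocabulary can be trained with SentencePiece along these lines; the input file name and vocabulary size are assumptions, not the original training setup.

```python
import sentencepiece as spm

# Train a unigram model on the (assumed) concatenated parallel corpus file.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_language_pairs.txt",  # placeholder file name
    model_prefix="legal_t5_unigram",
    model_type="unigram",
    vocab_size=32000,  # assumed; not stated in the card
)
```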
### Pretraining
## Evaluation results
When used on the classification test dataset, the model achieves the following results:
Test results:
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_fr | 0.6159|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_multitask_cs | 4b472d2680fa06056684363bd6f2130f3a3a0b17 | 2021-06-23T10:37:16.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_multitask_cs | 2 | null | transformers | 23,390 | Entry not found |
SEBIS/legal_t5_small_cls_multitask_de | ed24e7820c45cd4bf85fe0efeab2300136cea3d4 | 2021-06-23T10:38:26.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_multitask_de | 2 | null | transformers | 23,391 | Entry not found |
SEBIS/legal_t5_small_cls_multitask_en | 074212c0b64f9b0a56266d9903fb06d5faf44348 | 2021-06-23T10:39:08.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_multitask_en | 2 | null | transformers | 23,392 | Entry not found |
SEBIS/legal_t5_small_cls_multitask_es | 284ea8e13c631c6a8f63140ba79aba29696a2899 | 2021-06-23T10:42:26.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_multitask_es | 2 | null | transformers | 23,393 | Entry not found |
SEBIS/legal_t5_small_cls_multitask_it | 11745412c456d09c30c853ddd1a8abf8f1c8794e | 2021-06-23T10:43:53.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_multitask_it | 2 | null | transformers | 23,394 | Entry not found |
SEBIS/legal_t5_small_cls_multitask_sv | 5bbc3eaf3308b7f22bd4f7b184a07dfb8e87726f | 2021-06-23T10:45:02.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_multitask_sv | 2 | null | transformers | 23,395 | Entry not found |
SEBIS/legal_t5_small_cls_sv | c597dc3fd47e910e4329047b8373d434a3aaa28b | 2021-06-23T10:45:44.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish",
"dataset:jrc-acquis",
"transformers",
"classification Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_sv | 2 | null | transformers | 23,396 |
---
language: Swedish
tags:
- classification Swedish model
datasets:
- jrc-acquis
widget:
- text: "Rådets förordning (EG) nr 1973/2002 av den 5 november 2002 om ändring av förordning (EG) nr 2026/97 om skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen EUROPEISKA UNIONENS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska gemenskapen, särskilt artikel 133 i detta, med beaktande av kommissionens förslag, och av följande skäl: (1) Rådet antog genom förordning (EG) nr 2026/97(1) gemensamma regler för skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen. (2) I artikel 6 i förordning (EG) nr 2026/97 anges vissa riktlinjer för beräkning av förmånen för mottagaren, inbegripet det riktmärke för marknaden enligt vilket förmånens storlek beräknas. Det bör klargöras vilka bestämmelser som bör följas i de fall ett sådant riktmärke för marknaden inte finns i det berörda landet. I en sådan situation bör riktmärket fastställas genom anpassning av de villkor som råder i det berörda landet på grundval av de faktiska uppgifter som är tillgängliga där. Om detta inte är praktiskt genomförbart på grund av att det inte finns några uppgifter om sådana priser och kostnader eller på grund av att dessa är otillförlitliga, bör riktmärket fastställas med hjälp av de villkor som gäller på andra marknader. (3) I artikel 4 i förordning (EG) nr 2026/97 anges att vissa subventioner som rör miljö, forskning och regional utveckling inte är utjämningsbara. I artikel 10.5 och 10.6 i den förordningen anges vidare att undersökningar kan inledas för att avgöra om subventioner är icke-utjämningsbara och att de inte bör inledas om de rör vissa icke-utjämningsbara subventioner. Motsvarande bestämmelser i WTO-avtalet beträffande subventioner och utjämningsåtgärder var avsedda att löpa ut den 31 december 1999, såvida inte WTO-medlemsstaterna beslutade annat. Inget sådant beslut har fattats och de relevanta bestämmelserna är därför inte längre tillämpliga. Det är därför nödvändigt att fastställa huruvida bestämmelserna rörande icke-utjämningsbara subventioner i förordning (EG) nr 2026/97 bör fortsätta att gälla. Gemenskapens viktigaste handelspartner tillämpar inte längre dessa bestämmelser i sina utjämningsundersökningar. Av denna anledning och i syfte att upprätthålla balansen mellan rättigheter och skyldigheter enligt nämnda WTO-avtal bör de bestämmelser i förordning (EG) nr 2026/97 som rör icke-utjämningsbara subventioner upphöra att gälla. (4) I artikel 28.5 i förordning (EG) nr 2026/97 anges att om tillgängliga uppgifter används skall upplysningarna kontrolleras genom att jämföras med uppgifter från flera källor. Det bör specificeras att dessa källor också kan utgöras av uppgifter om världsmarknaden eller andra representativa marknader. (5) Ur rättssäkerhetssynpunkt är det lämpligt att dessa ändringar tillämpas så snart som möjligt i samband med alla nya undersökningar. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EG) nr 2026/97 ändras enligt följande: 1. I artikel 6 d skall följande text läggas till: %quot%Om det inte finns några sådana rådande marknadsvillkor för produkterna eller tjänsterna i fråga i det land som tillhandahåller eller köper dem, som kan användas som lämpliga riktmärken, skall en av följande bestämmelser tillämpas: i) De villkor som råder i landet i fråga skall justeras på grundval av de faktiska kostnader, priser och andra faktorer som är tillgängliga i det landet med hjälp av ett lämpligt belopp som avspeglar normala marknadsvillkor. 
ii) I tillämpliga fall skall de villkor användas som råder på marknaden i ett annat land eller på världsmarknaden och som är tillgängliga för mottagaren.%quot% 2. Artikel 4 och artikel 10.5 och 10.6 skall utgå. 3. I artikel 28.5 skall följande mening läggas till: %quot%Sådana uppgifter kan, i tillämpliga fall, inbegripa relevanta upplysningar om världsmarknaden eller andra representativa marknader.%quot% Artikel 2 Denna förordning träder i kraft dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Den skall tillämpas i samband med alla undersökningar som inleds i enlighet med förordning (EG) nr 2026/97 efter dagen för ikraftträdandet av denna förordning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 5 november 2002. På rådets vägnar T. Pedersen Ordförande (1) EGT L 288, 21.10.1997, s. 1."
---
# legal_t5_small_cls_sv model
Model for classification of legal text written in Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on the JRC-Acquis parallel corpus.
## Model description
legal_t5_small_cls_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in Swedish.
### How to use
Here is how to use this model to classify legal text written in Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Rådets förordning (EG) nr 1973/2002 av den 5 november 2002 om ändring av förordning (EG) nr 2026/97 om skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen EUROPEISKA UNIONENS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska gemenskapen, särskilt artikel 133 i detta, med beaktande av kommissionens förslag, och av följande skäl: (1) Rådet antog genom förordning (EG) nr 2026/97(1) gemensamma regler för skydd mot subventionerad import från länder som inte är medlemmar i Europeiska gemenskapen. (2) I artikel 6 i förordning (EG) nr 2026/97 anges vissa riktlinjer för beräkning av förmånen för mottagaren, inbegripet det riktmärke för marknaden enligt vilket förmånens storlek beräknas. Det bör klargöras vilka bestämmelser som bör följas i de fall ett sådant riktmärke för marknaden inte finns i det berörda landet. I en sådan situation bör riktmärket fastställas genom anpassning av de villkor som råder i det berörda landet på grundval av de faktiska uppgifter som är tillgängliga där. Om detta inte är praktiskt genomförbart på grund av att det inte finns några uppgifter om sådana priser och kostnader eller på grund av att dessa är otillförlitliga, bör riktmärket fastställas med hjälp av de villkor som gäller på andra marknader. (3) I artikel 4 i förordning (EG) nr 2026/97 anges att vissa subventioner som rör miljö, forskning och regional utveckling inte är utjämningsbara. I artikel 10.5 och 10.6 i den förordningen anges vidare att undersökningar kan inledas för att avgöra om subventioner är icke-utjämningsbara och att de inte bör inledas om de rör vissa icke-utjämningsbara subventioner. Motsvarande bestämmelser i WTO-avtalet beträffande subventioner och utjämningsåtgärder var avsedda att löpa ut den 31 december 1999, såvida inte WTO-medlemsstaterna beslutade annat. Inget sådant beslut har fattats och de relevanta bestämmelserna är därför inte längre tillämpliga. Det är därför nödvändigt att fastställa huruvida bestämmelserna rörande icke-utjämningsbara subventioner i förordning (EG) nr 2026/97 bör fortsätta att gälla. Gemenskapens viktigaste handelspartner tillämpar inte längre dessa bestämmelser i sina utjämningsundersökningar. Av denna anledning och i syfte att upprätthålla balansen mellan rättigheter och skyldigheter enligt nämnda WTO-avtal bör de bestämmelser i förordning (EG) nr 2026/97 som rör icke-utjämningsbara subventioner upphöra att gälla. (4) I artikel 28.5 i förordning (EG) nr 2026/97 anges att om tillgängliga uppgifter används skall upplysningarna kontrolleras genom att jämföras med uppgifter från flera källor. Det bör specificeras att dessa källor också kan utgöras av uppgifter om världsmarknaden eller andra representativa marknader. (5) Ur rättssäkerhetssynpunkt är det lämpligt att dessa ändringar tillämpas så snart som möjligt i samband med alla nya undersökningar. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EG) nr 2026/97 ändras enligt följande: 1. I artikel 6 d skall följande text läggas till: %quot%Om det inte finns några sådana rådande marknadsvillkor för produkterna eller tjänsterna i fråga i det land som tillhandahåller eller köper dem, som kan användas som lämpliga riktmärken, skall en av följande bestämmelser tillämpas: i) De villkor som råder i landet i fråga skall justeras på grundval av de faktiska kostnader, priser och andra faktorer som är tillgängliga i det landet med hjälp av ett lämpligt belopp som avspeglar normala marknadsvillkor. 
ii) I tillämpliga fall skall de villkor användas som råder på marknaden i ett annat land eller på världsmarknaden och som är tillgängliga för mottagaren.%quot% 2. Artikel 4 och artikel 10.5 och 10.6 skall utgå. 3. I artikel 28.5 skall följande mening läggas till: %quot%Sådana uppgifter kan, i tillämpliga fall, inbegripa relevanta upplysningar om världsmarknaden eller andra representativa marknader.%quot% Artikel 2 Denna förordning träder i kraft dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Den skall tillämpas i samband med alla undersökningar som inleds i enlighet med förordning (EG) nr 2026/97 efter dagen för ikraftträdandet av denna förordning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 5 november 2002. På rådets vägnar T. Pedersen Ordförande (1) EGT L 288, 21.10.1997, s. 1."
pipeline([sv_text], max_length=512)
```
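Continuing from the snippet above: the pipeline returns its prediction as generated text, so the class label can be read from the standard `TranslationPipeline` output field. The exact label strings the model emits are not documented here, so treat this as a sketch.

```python
# Continues the example above: `pipeline` and `sv_text` are defined there.
result = pipeline([sv_text], max_length=512)

# TranslationPipeline returns a list of dicts keyed by "translation_text";
# for this model the generated text is the predicted class label.
predicted_label = result[0]["translation_text"]
print(predicted_label)
```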
## Training data
The legal_t5_small_cls_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 and a batch size of 64. It has about 60 million parameters (as noted in the model description) and uses the encoder-decoder architecture. The optimizer is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
### Pretraining
## Evaluation results
When used on the classification test dataset, the model achieves the following results:
Test results:
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_sv | 0.6449|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_finetuned_summ_de | d9d5c0acf30379350ef147e8d3cb881c1e6785f3 | 2021-06-23T10:47:04.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_finetuned_summ_de | 2 | null | transformers | 23,397 | Entry not found |
SEBIS/legal_t5_small_finetuned_summ_es | a2d1cbf6a0d6c102135f2998ccad8c3d40836803 | 2021-06-23T10:48:12.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_finetuned_summ_es | 2 | null | transformers | 23,398 | Entry not found |
SEBIS/legal_t5_small_finetuned_summ_sv | 868b76331686d1a1f9db1bd20b53d91eb56dae62 | 2021-06-23T10:50:11.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_finetuned_summ_sv | 2 | null | transformers | 23,399 | Entry not found |