Column types: modelId string (length 4-112), sha string (40), lastModified string (24), tags sequence, pipeline_tag string (29 classes), private bool (1 class), author string (2-38, nullable), config null, id string (4-112), downloads float64 (0 to 36.8M, nullable), likes float64 (0 to 712, nullable), library_name string (17 classes), __index_level_0__ int64 (0 to 38.5k), readme string (length 0 to 186k).

| modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune | e32f9bcbdb19d8e2149fba183b38f5b7c9462ae0 | 2021-06-23T10:23:41.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune | 3 | null | transformers | 20,900 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization (Python)
Pretrained model for the Python programming language, using the T5-small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). The model was trained on tokenized Python functions and works best on tokenized input.
## Model description
This CodeTrans model is based on the `t5-small` model and has its own SentencePiece vocabulary model. It was pre-trained with transfer learning on 7 unsupervised datasets from the software-development domain, and then fine-tuned on the source code summarization task for Python code snippets.
## Intended uses & limitations
The model can be used to generate a description for a Python function, or be fine-tuned on other Python code tasks. It can be used on unparsed and untokenized Python code; however, performance should be better if the code is tokenized first.
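The exact tokenizer used to prepare the training data is not specified here; as a rough sketch (an assumption, not the project's actual preprocessing), Python's standard `tokenize` module produces a similar space-separated token stream:
```python
import io
import tokenize

def space_tokenize(code: str) -> str:
    # Join the lexical tokens of a Python snippet with spaces, roughly
    # matching the format of `tokenized_code` in the example below.
    skip = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER}
    toks = [t.string for t in tokenize.generate_tokens(io.StringIO(code).readline)
            if t.type not in skip]
    return " ".join(toks)

print(space_tokenize('with open("in.txt") as f:\n    print(f.read())'))
# -> with open ( "in.txt" ) as f : print ( f . read ( ) )
```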
### How to use
Here is how to use this model to generate Python function documentation with the Transformers `SummarizationPipeline`:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune", skip_special_tokens=True),
device=0  # run on the first GPU; set device=-1 to run on CPU
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/python/small_model.ipynb).
## Training data
The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
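For reference, a comparable optimizer setup can be approximated with the `Adafactor` implementation in `transformers` (a sketch; the exact hyperparameters used in training are not given in the card):
```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor

model = T5ForConditionalGeneration.from_pretrained(
    "SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune"
)
# With relative_step=True, Adafactor derives an inverse-square-root step
# size internally, matching the schedule described above.
optimizer = Adafactor(
    model.parameters(),
    lr=None,               # learning rate is computed from the step count
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
```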
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256) and only the dataset containing Python code.
## Evaluation results
For the source code summarization tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results:
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_cls_de | 1ed62966ab5a9f5e2d202577314af33e58bf931b | 2021-06-23T10:27:59.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch",
"dataset:jrc-acquis",
"transformers",
"classification Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_de | 3 | null | transformers | 20,901 |
---
language: Deustch
tags:
- classification Deustch model
datasets:
- jrc-acquis
widget:
- text: "BESCHLUSS DES RATES vom 17. Dezember 1999 über den Abschluß des Abkommens in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft (1999/873/EG) DER RAT DER EUROPÄISCHEN UNION - gestützt auf den Vertrag zur Gründung der Europäischen Gemeinschaft, insbesondere auf Artikel 133 in Verbindung mit Artikel 300 Absatz 2 Unterabsatz 1, auf Vorschlag der Kommission, in Erwägung nachstehender Gründe: (1) Zwischen der Europäischen Gemeinschaft und der Tunesischen Republik wurde ein Abkommen in Form eines Briefwechsels ausgehandelt, um die Geltungsdauer der Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft, die in Artikel 3 des Protokolls Nr. 1 des Europa-Mittelmeer-Abkommens zur Gründung einer Assoziation zwischen der Europäischen Gemeinschaft und ihren Mitgliedstaaten einerseits und der Tunesischen Republik andererseits(1) vorgesehen ist, für die Zeit vom 1. Januar bis zum 31. Dezember 2000 zu verlängern. (2) Das Abkommen sollte im Namen der Gemeinschaft genehmigt werden - BESCHLIESST: Artikel 1 Das Abkommen in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft wird im Namen der Gemeinschaft genehmigt. Der Wortlaut des Abkommens ist diesem Beschluß beigefügt. Artikel 2 Der Präsident des Rates wird ermächtigt, die Person zu bestellen, die befugt ist, das Abkommen rechtsverbindlich für die Gemeinschaft zu unterzeichnen. Geschehen zu Brüssel am 17. Dezember 1999. Im Namen des Rates Der Präsident K. HEMILÄ (1) ABl. L 97 vom 30.3.1998, S. 1."
---
# legal_t5_small_cls_de model
Model for classification of legal text written in German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on the parallel corpus from jrc-acquis.
## Model description
legal_t5_small_cls_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
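For reference, the shape described above corresponds to a `T5Config` along these lines (a sketch; all other settings, such as the vocabulary size, are library defaults rather than the model's confirmed values):
```python
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    d_model=512,           # dmodel = 512
    d_ff=2048,             # dff = 2,048
    num_heads=8,           # 8-headed attention
    num_layers=6,          # 6 layers in the encoder
    num_decoder_layers=6,  # 6 layers in the decoder
)
model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
# -> roughly 60M, as stated above
```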
## Intended uses & limitations
The model could be used for classification of legal texts written in German.
### How to use
Here is how to use this model to classify legal text written in German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "BESCHLUSS DES RATES vom 17. Dezember 1999 über den Abschluß des Abkommens in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft (1999/873/EG) DER RAT DER EUROPÄISCHEN UNION - gestützt auf den Vertrag zur Gründung der Europäischen Gemeinschaft, insbesondere auf Artikel 133 in Verbindung mit Artikel 300 Absatz 2 Unterabsatz 1, auf Vorschlag der Kommission, in Erwägung nachstehender Gründe: (1) Zwischen der Europäischen Gemeinschaft und der Tunesischen Republik wurde ein Abkommen in Form eines Briefwechsels ausgehandelt, um die Geltungsdauer der Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft, die in Artikel 3 des Protokolls Nr. 1 des Europa-Mittelmeer-Abkommens zur Gründung einer Assoziation zwischen der Europäischen Gemeinschaft und ihren Mitgliedstaaten einerseits und der Tunesischen Republik andererseits(1) vorgesehen ist, für die Zeit vom 1. Januar bis zum 31. Dezember 2000 zu verlängern. (2) Das Abkommen sollte im Namen der Gemeinschaft genehmigt werden - BESCHLIESST: Artikel 1 Das Abkommen in Form eines Briefwechsels zwischen der Europäischen Gemeinschaft und der Tunesischen Republik über die Regelung für die Einfuhr von nicht behandeltem Olivenöl mit Ursprung in Tunesien in die Gemeinschaft wird im Namen der Gemeinschaft genehmigt. Der Wortlaut des Abkommens ist diesem Beschluß beigefügt. Artikel 2 Der Präsident des Rates wird ermächtigt, die Person zu bestellen, die befugt ist, das Abkommen rechtsverbindlich für die Gemeinschaft zu unterzeichnen. Geschehen zu Brüssel am 17. Dezember 1999. Im Namen des Rates Der Präsident K. HEMILÄ (1) ABl. L 97 vom 30.3.1998, S. 1."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_cls_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
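The exact trainer invocation is not given in the card; a minimal SentencePiece sketch of training such a unigram vocabulary model, with hypothetical file names and an assumed vocabulary size, might look like:
```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # hypothetical dump of the 88M lines
    model_prefix="legal_t5_small",
    model_type="unigram",
    vocab_size=32000,                       # assumed; not stated in the card
)
sp = spm.SentencePieceProcessor(model_file="legal_t5_small.model")
print(sp.encode("Komise musí vypracovat zprávu.", out_type=str))
```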
## Evaluation results
When used on the classification test dataset, the model achieves the following results:
Test results:
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_de | 0.6358|
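The card does not state how this F1 score was aggregated; a minimal sketch of such an evaluation with scikit-learn, using hypothetical labels and an assumed averaging mode, could look like:
```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted class labels for a few test documents.
y_true = ["agriculture", "trade", "tax", "trade"]
y_pred = ["agriculture", "tax", "tax", "trade"]

# average="weighted" is an assumption; the card does not specify it.
print(f1_score(y_true, y_pred, average="weighted"))
```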
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_cs_en | 5862e5d39b43934a3dd1ec45ea1d9fbfb067e390 | 2021-06-23T10:51:17.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_cs_en | 3 | null | transformers | 20,902 |
---
language: Cszech English
tags:
- translation Cszech English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Komise musí vypracovat zprávu o hodnotících zprávách týkajících se uplatňování této směrnice v členských státech."
---
# legal_t5_small_multitask_cs_en model
Model for translating legal text from Czech to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep), covering 42 language pairs,
along with an unsupervised task in which the model had to predict randomly masked portions of a sentence (masked language modelling).
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_cs_en model; instead, the unsupervised task is trained jointly with all of the translation tasks to realize the multitask learning scenario.
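As a toy illustration of this setup (task names and mixing proportions are assumptions, not the project's actual schedule), each training step can draw its batch from one of the tasks:
```python
import random

# Hypothetical task mixture: translation pairs plus the unsupervised task.
tasks = ["translate cs->en", "translate de->sv", "masked language modelling"]
weights = [0.45, 0.45, 0.10]  # assumed mixing rates

rng = random.Random(0)
schedule = rng.choices(tasks, weights=weights, k=6)
print(schedule)  # the task each of the next six batches is drawn from
```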
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to English.
### How to use
Here is how to use this model to translate legal text from Czech to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Komise musí vypracovat zprávu o hodnotících zprávách týkajících se uplatňování této směrnice v členských státech."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_en model (where the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_en | 37.136|
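A minimal sketch of a corpus-level BLEU computation with `sacrebleu` (the sentence pair is hypothetical, and the exact BLEU variant behind the reported number is not stated in the card):
```python
import sacrebleu

hypotheses = ["The Commission must draw up a report on the evaluation reports."]
references = [["The Commission must draw up a report on the evaluation reports "
               "concerning the application of this Directive in the Member States."]]

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```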
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_sv | 3c4ecb1d9bdfc9d292837b90cd8042c2af92da58 | 2021-06-23T10:56:56.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_de_sv | 3 | null | transformers | 20,903 |
---
language: Deustch Swedish
tags:
- translation Deustch Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "SCHRIFTLICHE ANFRAGE P-1584/03"
---
# legal_t5_small_multitask_de_sv model
Model for translating legal text from German to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep), covering 42 language pairs,
along with an unsupervised task in which the model had to predict randomly masked portions of a sentence (masked language modelling).
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_de_sv model; instead, the unsupervised task is trained jointly with all of the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Swedish.
### How to use
Here is how to use this model to translate legal text from German to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "SCHRIFTLICHE ANFRAGE P-1584/03"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_sv model (where the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_sv | 35.945|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_de | 788fde3bfde790c0a5f3869a45e99da440063c3c | 2021-06-23T11:09:30.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_fr_de | 3 | null | transformers | 20,904 | Entry not found |
SEBIS/legal_t5_small_multitask_fr_sv | 85f84277d5a941b720f65405e612c8b1893059fa | 2021-06-23T11:12:04.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_fr_sv | 3 | null | transformers | 20,905 |
---
language: French Swedish
tags:
- translation French Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "**I Procédure de coopération (première lecture)"
---
# legal_t5_small_multitask_fr_sv model
Model for translating legal text from French to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep), covering 42 language pairs,
along with an unsupervised task in which the model had to predict randomly masked portions of a sentence (masked language modelling).
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_fr_sv model; instead, the unsupervised task is trained jointly with all of the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Swedish.
### How to use
Here is how to use this model to translate legal text from French to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "**I Procédure de coopération (première lecture)"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_sv model (where the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_sv | 39.947|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_en | d2e7048fad0dd360ef45b7da347878504dec4fa3 | 2021-06-23T11:18:13.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_sv_en | 3 | null | transformers | 20,906 |
---
language: Swedish English
tags:
- translation Swedish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "inlämnat av följande ledamöter:"
---
# legal_t5_small_multitask_sv_en model
Model for translating legal text from Swedish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep), covering 42 language pairs,
along with an unsupervised task in which the model had to predict randomly masked portions of a sentence (masked language modelling).
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_en model; instead, the unsupervised task is trained jointly with all of the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to English.
### How to use
Here is how to use this model to translate legal text from Swedish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "inlämnat av följande ledamöter:"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_en model (where the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_en | 36.195|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_cs | a89f83bc050971dd1bcb65b17731a92a7ece5d1c | 2021-06-23T11:20:42.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech",
"dataset:jrc-acquis",
"transformers",
"summarization Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_cs | 3 | null | transformers | 20,907 |
---
language: Cszech
tags:
- summarization Cszech model
datasets:
- jrc-acquis
widget:
- text: "(2006/C 67/15) (Text s významem pro EHP) Dne 10. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4093. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) -------------------------------------------------- "
---
# legal_t5_small_summ_cs model
Model for summarization of legal text written in Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on the parallel corpus from jrc-acquis.
## Model description
legal_t5_small_summ_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in Czech.
### How to use
Here is how to use this model to summarize legal text written in Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "(2006/C 67/15) (Text s významem pro EHP) Dne 10. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4093. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) -------------------------------------------------- "
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_summ_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 18 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
## Evaluation results
When used on the summarization test dataset, the model achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_cs | 75.86|65.82 |74.95|
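A minimal sketch of how such scores can be computed with the `rouge-score` package (the text pair below is hypothetical, and aggregation over the whole test set is omitted):
```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"])
scores = scorer.score(
    "Komise se rozhodla nevznést námitky proti spojení.",   # reference summary
    "Komise rozhodla, že nevznese námitky proti spojení.",  # model output
)
for name, s in scores.items():
    print(name, round(s.fmeasure, 4))
```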
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_fr | 3b192ad3344dc58279b327c1302b31eed9bf4236 | 2021-06-23T11:23:07.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French",
"dataset:jrc-acquis",
"transformers",
"summarization French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_fr | 3 | null | transformers | 20,908 |
---
language: French
tags:
- summarization French model
datasets:
- jrc-acquis
widget:
- text: "LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CE) no 1784/2003 du Conseil du 29 septembre 2003 portant organisation commune des marchés dans le secteur des céréales [1], et notamment son article 13, paragraphe 3, vu le règlement (CE) no 1785/2003 du Conseil du 29 septembre 2003 portant organisation commune du marché du riz [2], et notamment son article 14, paragraphe 3, considérant ce qui suit: (1) Conformément à l'article 13, paragraphe 1, du règlement (CE) no 1784/2003 et à l'article 14, paragraphe 1, du règlement (CE) no 1785/2003, la différence entre les cours ou les prix sur le marché mondial des produits visés à l'article 1er de chacun de ces deux règlements et les prix dans la Communauté peut être couverte par une restitution à l'exportation. (2) Le règlement (CE) no 1043/2005 de la Commission du 30 juin 2005 portant application du règlement (CE) no 3448/93 du Conseil en ce qui concerne le système d’octroi des restitutions à l'exportation pour certains produits agricoles exportés sous forme de marchandises ne relevant pas de l'annexe I du traité ainsi que les critères de fixation de leurs montants [3] a spécifié ceux de ces produits pour lesquels il y a lieu de fixer un taux de restitution applicable lors de leur exportation sous forme de marchandises reprises, selon le cas, à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003. (3) Conformément à l'article 14, paragraphe 1, du règlement (CE) no 1043/2005, le taux de la restitution par 100 kilogrammes de chacun des produits de base considérés doit être fixé chaque mois. (4) Les engagements pris en matière de restitutions pouvant être octroyées à l'exportation de produits agricoles incorporés dans des marchandises ne relevant pas de l'annexe I du traité peuvent être mis en péril par la fixation à l'avance de taux de restitution élevés. Il convient, dès lors, de prendre des mesures de sauvegarde dans ces situations sans empêcher pour autant la conclusion de contrats à long terme. La fixation d'un taux de restitution spécifique pour la fixation à l'avance des restitutions est une mesure permettant de rencontrer ces différents objectifs. (5) À la suite de l'arrangement entre la Communauté européenne et les États-Unis d'Amérique concernant les exportations de pâtes alimentaires de la Communauté aux États-Unis approuvé par la décision 87/482/CEE du Conseil [4], il est nécessaire de différencier la restitution pour les marchandises relevant des codes NC 19021100 et 190219 selon leur destination. (6) Conformément à l'article 15, paragraphes 2 et 3, du règlement (CE) no 1043/2005, il y a lieu de fixer un taux de restitution à l'exportation réduit, compte tenu du montant de la restitution à la production applicable, en vertu du règlement (CEE) no 1722/93 de la Commission [5], au produit de base mis en œuvre, valable au cours de la période présumée de fabrication des marchandises. (7) Les boissons spiritueuses sont considérées comme moins sensibles au prix des céréales mises en œuvre pour leur fabrication. Toutefois, le protocole 19 du traité d'adhésion du Royaume-Uni, de l'Irlande et du Danemark prévoit que des mesures nécessaires doivent être arrêtées afin de faciliter l'utilisation des céréales communautaires pour la fabrication de boissons spiritueuses obtenues à partir de céréales. Il convient donc d'adapter le taux de restitution applicable aux céréales exportées sous forme de boissons spiritueuses. 
(8) Le comité de gestion des céréales n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les taux des restitutions applicables aux produits de base figurant à l'annexe I du règlement (CE) no 1043/2005 et à l'article 1er du règlement (CE) no 1784/2003 ou à l'article 1er du règlement (CE) no 1785/2003 modifié, qui sont exportés sous forme de marchandises reprises respectivement à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003, sont fixés comme indiqué à l'annexe du présent règlement. Article 2 Le présent règlement entre en vigueur le 23 septembre 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 22 septembre 2005. Par la Commission Günter Verheugen Vice-président [1] JO L 270 du 21.10.2003, p. 78. [2] JO L 270 du 21.10.2003, p. 96. [3] JO L 172 du 5.7.2005, p. 24. [4] JO L 275 du 29.9.1987, p. 36. [5] JO L 159 du 1.7.1993, p. 112. Règlement modifié en dernier lieu par le règlement (CE) no 1584/2004 (JO L 280 du 31.8.2004, p. 11). -------------------------------------------------- ANNEXE Taux des restitutions applicables à compter du 23 septembre 2005 à certains produits des secteurs des céréales et du riz exportés sous forme de marchandises ne relevant pas de l'annexe I du traité [1] (en EUR/100 kg) | Code NC | Désignation des marchandises | Taux de la restitution par 100 kg du produit de base | En cas de fixation à l'avance des restitutions | Autres | 10011000 | Froment (blé) dur: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas | — | — | 10019099 | Froment (blé) tendre et méteil: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | — | — | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – – dans les autres cas | — | — | 10020000 | Seigle | — | — | 10030090 | Orge | | | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – dans les autres cas | — | — | 10040000 | Avoine | — | — | 10059000 | Maïs, mis en œuvre sous forme de: | | | – amidon: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,994 | 3,150 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – – dans les autres cas | 4,615 | 4,615 | – glucose, sirop de glucose, maltodextrine, sirop de maltodextrine des codes NC 17023051, 17023059, 17023091, 17023099, 17024090, 17029050, 17029075, 17029079, 21069055: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 1,840 | 1,996 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 1,776 | 1,776 | – – dans les autres cas | 3,461 | 3,461 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – autres (y compris en l'état) | 4,615 | 4,615 | Fécule de pommes de terre du code NC 11081300 assimilée à un produit issu de la transformation du maïs: | | | – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,435 | 2,585 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – dans les autres cas | 4,615 | 4,615 | ex100630 | 
Riz blanchi: | | | – à grains ronds | — | — | – à grains moyens | — | — | – à grains longs | — | — | 10064000 | Riz en brisures | — | — | 10070090 | Sorgho à grains (à l'excl. du sorgho à grains, hybride, destiné à l'ensemencement) | — | — | [1] Les taux prévus à la présente annexe ne s’appliquent pas avec effet au 1er octobre 2004 aux exportations vers la Bulgarie et avec effet au 1er février 2005 aux marchandises visées aux tableaux I et II du Protocole no 2 de l’Accord entre la Communauté économique européenne et la Confédération suisse du 22 juillet 1972 qui sont exportées vers la Confédération suisse ou la principauté de Liechtenstein. [2] En ce qui concerne les produits agricoles obtenus par transformation d’un produit de base et/ou de produits assimilés, les coefficients fixés à l’annexe V du règlement (CE) no 1043/2005 de la Commission s’appliquent. [3] La marchandise concernée relève du code NC 35051050. [4] Marchandises reprises à l'annexe III du règlement (CE) no 1784/2003 ou visées à l'article 2 du règlement (CEE) no 2825/93 (JO L 258 du 16.10.1993, p. 6). [5] Pour les sirops des codes NC 17023099, 17024090 et 17026090, obtenus par mélange de sirops de glucose et fructose, seul le sirop de glucose a droit à la restitution à l'exportation. -------------------------------------------------- "
---
# legal_t5_small_summ_fr model
Model for summarization of legal text written in French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on the parallel corpus from jrc-acquis.
## Model description
legal_t5_small_summ_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in French.
### How to use
Here is how to use this model to summarize legal text written in French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "LA COMMISSION DES COMMUNAUTÉS EUROPÉENNES, vu le traité instituant la Communauté européenne, vu le règlement (CE) no 1784/2003 du Conseil du 29 septembre 2003 portant organisation commune des marchés dans le secteur des céréales [1], et notamment son article 13, paragraphe 3, vu le règlement (CE) no 1785/2003 du Conseil du 29 septembre 2003 portant organisation commune du marché du riz [2], et notamment son article 14, paragraphe 3, considérant ce qui suit: (1) Conformément à l'article 13, paragraphe 1, du règlement (CE) no 1784/2003 et à l'article 14, paragraphe 1, du règlement (CE) no 1785/2003, la différence entre les cours ou les prix sur le marché mondial des produits visés à l'article 1er de chacun de ces deux règlements et les prix dans la Communauté peut être couverte par une restitution à l'exportation. (2) Le règlement (CE) no 1043/2005 de la Commission du 30 juin 2005 portant application du règlement (CE) no 3448/93 du Conseil en ce qui concerne le système d’octroi des restitutions à l'exportation pour certains produits agricoles exportés sous forme de marchandises ne relevant pas de l'annexe I du traité ainsi que les critères de fixation de leurs montants [3] a spécifié ceux de ces produits pour lesquels il y a lieu de fixer un taux de restitution applicable lors de leur exportation sous forme de marchandises reprises, selon le cas, à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003. (3) Conformément à l'article 14, paragraphe 1, du règlement (CE) no 1043/2005, le taux de la restitution par 100 kilogrammes de chacun des produits de base considérés doit être fixé chaque mois. (4) Les engagements pris en matière de restitutions pouvant être octroyées à l'exportation de produits agricoles incorporés dans des marchandises ne relevant pas de l'annexe I du traité peuvent être mis en péril par la fixation à l'avance de taux de restitution élevés. Il convient, dès lors, de prendre des mesures de sauvegarde dans ces situations sans empêcher pour autant la conclusion de contrats à long terme. La fixation d'un taux de restitution spécifique pour la fixation à l'avance des restitutions est une mesure permettant de rencontrer ces différents objectifs. (5) À la suite de l'arrangement entre la Communauté européenne et les États-Unis d'Amérique concernant les exportations de pâtes alimentaires de la Communauté aux États-Unis approuvé par la décision 87/482/CEE du Conseil [4], il est nécessaire de différencier la restitution pour les marchandises relevant des codes NC 19021100 et 190219 selon leur destination. (6) Conformément à l'article 15, paragraphes 2 et 3, du règlement (CE) no 1043/2005, il y a lieu de fixer un taux de restitution à l'exportation réduit, compte tenu du montant de la restitution à la production applicable, en vertu du règlement (CEE) no 1722/93 de la Commission [5], au produit de base mis en œuvre, valable au cours de la période présumée de fabrication des marchandises. (7) Les boissons spiritueuses sont considérées comme moins sensibles au prix des céréales mises en œuvre pour leur fabrication. Toutefois, le protocole 19 du traité d'adhésion du Royaume-Uni, de l'Irlande et du Danemark prévoit que des mesures nécessaires doivent être arrêtées afin de faciliter l'utilisation des céréales communautaires pour la fabrication de boissons spiritueuses obtenues à partir de céréales. Il convient donc d'adapter le taux de restitution applicable aux céréales exportées sous forme de boissons spiritueuses. 
(8) Le comité de gestion des céréales n'a pas émis d'avis dans le délai imparti par son président, A ARRÊTÉ LE PRÉSENT RÈGLEMENT: Article premier Les taux des restitutions applicables aux produits de base figurant à l'annexe I du règlement (CE) no 1043/2005 et à l'article 1er du règlement (CE) no 1784/2003 ou à l'article 1er du règlement (CE) no 1785/2003 modifié, qui sont exportés sous forme de marchandises reprises respectivement à l'annexe III du règlement (CE) no 1784/2003 ou à l'annexe IV du règlement (CE) no 1785/2003, sont fixés comme indiqué à l'annexe du présent règlement. Article 2 Le présent règlement entre en vigueur le 23 septembre 2005. Le présent règlement est obligatoire dans tous ses éléments et directement applicable dans tout État membre. Fait à Bruxelles, le 22 septembre 2005. Par la Commission Günter Verheugen Vice-président [1] JO L 270 du 21.10.2003, p. 78. [2] JO L 270 du 21.10.2003, p. 96. [3] JO L 172 du 5.7.2005, p. 24. [4] JO L 275 du 29.9.1987, p. 36. [5] JO L 159 du 1.7.1993, p. 112. Règlement modifié en dernier lieu par le règlement (CE) no 1584/2004 (JO L 280 du 31.8.2004, p. 11). -------------------------------------------------- ANNEXE Taux des restitutions applicables à compter du 23 septembre 2005 à certains produits des secteurs des céréales et du riz exportés sous forme de marchandises ne relevant pas de l'annexe I du traité [1] (en EUR/100 kg) | Code NC | Désignation des marchandises | Taux de la restitution par 100 kg du produit de base | En cas de fixation à l'avance des restitutions | Autres | 10011000 | Froment (blé) dur: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas | — | — | 10019099 | Froment (blé) tendre et méteil: | | | – en cas d'exportation de marchandises relevant des codes NC 190211 et 190219 vers les États-Unis d'Amérique | — | — | – dans les autres cas: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | — | — | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – – dans les autres cas | — | — | 10020000 | Seigle | — | — | 10030090 | Orge | | | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | — | — | – dans les autres cas | — | — | 10040000 | Avoine | — | — | 10059000 | Maïs, mis en œuvre sous forme de: | | | – amidon: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,994 | 3,150 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – – dans les autres cas | 4,615 | 4,615 | – glucose, sirop de glucose, maltodextrine, sirop de maltodextrine des codes NC 17023051, 17023059, 17023091, 17023099, 17024090, 17029050, 17029075, 17029079, 21069055: | | | – – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 1,840 | 1,996 | – – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 1,776 | 1,776 | – – dans les autres cas | 3,461 | 3,461 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – autres (y compris en l'état) | 4,615 | 4,615 | Fécule de pommes de terre du code NC 11081300 assimilée à un produit issu de la transformation du maïs: | | | – en cas d'application de l'article 15, paragraphe 3, du règlement (CE) no 1043/2005 | 2,435 | 2,585 | – en cas d'exportation de marchandises relevant du sous-chapitre 2208 | 2,368 | 2,368 | – dans les autres cas | 4,615 | 4,615 | ex100630 | 
Riz blanchi: | | | – à grains ronds | — | — | – à grains moyens | — | — | – à grains longs | — | — | 10064000 | Riz en brisures | — | — | 10070090 | Sorgho à grains (à l'excl. du sorgho à grains, hybride, destiné à l'ensemencement) | — | — | [1] Les taux prévus à la présente annexe ne s’appliquent pas avec effet au 1er octobre 2004 aux exportations vers la Bulgarie et avec effet au 1er février 2005 aux marchandises visées aux tableaux I et II du Protocole no 2 de l’Accord entre la Communauté économique européenne et la Confédération suisse du 22 juillet 1972 qui sont exportées vers la Confédération suisse ou la principauté de Liechtenstein. [2] En ce qui concerne les produits agricoles obtenus par transformation d’un produit de base et/ou de produits assimilés, les coefficients fixés à l’annexe V du règlement (CE) no 1043/2005 de la Commission s’appliquent. [3] La marchandise concernée relève du code NC 35051050. [4] Marchandises reprises à l'annexe III du règlement (CE) no 1784/2003 ou visées à l'article 2 du règlement (CEE) no 2825/93 (JO L 258 du 16.10.1993, p. 6). [5] Pour les sirops des codes NC 17023099, 17024090 et 17026090, obtenus par mélange de sirops de glucose et fructose, seul le sirop de glucose a droit à la restitution à l'exportation. -------------------------------------------------- "
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_summ_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
## Evaluation results
When used on the summarization test dataset, the model achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_fr | 77.1|67.97 |75.74|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_fr_small_finetuned | 16e8712ad5428ecc3215d5374168d8bbcf52f6ae | 2021-06-23T11:34:20.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_fr_small_finetuned | 3 | null | transformers | 20,909 |
---
language: Cszech French
tags:
- translation Cszech French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "9:00 - 10:50 Komise (včetně odpovědí)"
---
# legal_t5_small_trans_cs_fr_small_finetuned model
Model for translating legal text from Czech to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_fr_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data of the training set. legal_t5_small_trans_cs_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to French.
### How to use
Here is how to use this model to translate legal text from Czech to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "9:00 - 10:50 Komise (včetně odpovědí)"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_fr_small_finetuned model (where the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
### Pretraining
The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict randomly masked portions of a sentence.
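A toy sketch of this objective (an illustration, not the exact corruption procedure used in training): randomly chosen tokens are replaced with T5-style sentinel tokens in the input, and the target lists what they hide:
```python
import random

def mask_for_pretraining(tokens, mask_prob=0.15, seed=0):
    # Replace roughly 15% of tokens with sentinel tokens; the target lists
    # each sentinel followed by the token it hides.
    rng = random.Random(seed)
    inp, tgt, sentinel = [], [], 0
    for tok in tokens:
        if rng.random() < mask_prob:
            inp.append(f"<extra_id_{sentinel}>")
            tgt += [f"<extra_id_{sentinel}>", tok]
            sentinel += 1
        else:
            inp.append(tok)
    return " ".join(inp), " ".join(tgt)

src = "Komise musí vypracovat zprávu o uplatňování této směrnice".split()
print(mask_for_pretraining(src))
```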
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_fr_small_finetuned | 50.717|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_cs | 96c2d47ce9a8feb16d5be3594b665273f6c5b28f | 2021-06-23T11:37:25.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_cs | 3 | null | transformers | 20,910 |
---
language: Deustch Cszech
tags:
- translation Deustch Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "17. empfiehlt die Einführung einer spezifischen Strategie zur Unterstützung neuer und demokratisch gewählter Parlamente im Hinblick auf eine dauerhafte Verankerung von Demokratie, Rechtsstaatlichkeit und guter Staatsführung;"
---
# legal_t5_small_trans_de_cs model
Model for translating legal text from German to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on the three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Czech.
### How to use
Here is how to use this model to translate legal text from German to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "17. empfiehlt die Einführung einer spezifischen Strategie zur Unterstützung neuer und demokratisch gewählter Parlamente im Hinblick auf eine dauerhafte Verankerung von Demokratie, Rechtsstaatlichkeit und guter Staatsführung;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_cs | 44.07|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_cs_small_finetuned | 0e7e619ff88a24f6e08b2460b6dbb9e38953da93 | 2021-06-23T09:27:15.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_cs_small_finetuned | 3 | null | transformers | 20,911 |
---
language: Deustch Cszech
tags:
- translation Deustch Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Der Rahmenbeschluss sieht ein beschleunigtes Verfahren für die Anerkennung und Vollstreckung von freiheitsentziehenden Maßnahmen oder Maßnahmen der Sicherung (bei Unzurechnungsfähigkeit oder verminderter Schuldfähigkeit), die von einem Gericht eines anderen Mitgliedstaats gegen eine Person verhängt wurden, durch einen Mitgliedstaat vor, dessen Staatsangehörigkeit die Person besitzt, in dem sie ihren rechtmäßigen Aufenthalt hat oder zu dem sie enge Verbindungen hat."
---
# legal_t5_small_trans_de_cs_small_finetuned model
Model for translating legal text from German to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_cs_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_de_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Czech.
### How to use
Here is how to use this model to translate legal text from German to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Der Rahmenbeschluss sieht ein beschleunigtes Verfahren für die Anerkennung und Vollstreckung von freiheitsentziehenden Maßnahmen oder Maßnahmen der Sicherung (bei Unzurechnungsfähigkeit oder verminderter Schuldfähigkeit), die von einem Gericht eines anderen Mitgliedstaats gegen eine Person verhängt wurden, durch einen Mitgliedstaat vor, dessen Staatsangehörigkeit die Person besitzt, in dem sie ihren rechtmäßigen Aufenthalt hat oder zu dem sie enge Verbindungen hat."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_cs_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
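As an illustration of this objective, the following simplified sketch (not the authors' pre-training code; real T5-style corruption masks contiguous spans rather than single tokens) shows how masked inputs and targets can be built with sentinel tokens:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Replace random tokens with sentinels; the target lists what was masked."""
    rng = random.Random(seed)
    source, target, sentinel_id = [], [], 0
    for token in tokens:
        if rng.random() < mask_prob:
            sentinel = f"<extra_id_{sentinel_id}>"
            source.append(sentinel)           # masked position in the input
            target.extend([sentinel, token])  # what the decoder must reconstruct
            sentinel_id += 1
        else:
            source.append(token)
    return " ".join(source), " ".join(target)

src, tgt = mask_tokens("Der Rahmenbeschluss sieht ein beschleunigtes Verfahren vor".split())
print(src)
print(tgt)
```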
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_cs_small_finetuned | 43.750|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_es_small_finetuned | 83d0afbd5092fc1f505f96d4c8f6bd26c3f3281d | 2021-06-23T09:29:41.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_es_small_finetuned | 3 | null | transformers | 20,912 |
---
language: Deustch Spanish
tags:
- translation Deustch Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"
---
# legal_t5_small_trans_de_es_small_finetuned model
Model for translating legal text from German to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_es_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_de_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Spanish.
### How to use
Here is how to use this model to translate legal text from German to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_es_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
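A minimal sketch of how such a unigram vocabulary could be built with SentencePiece (the file name and vocabulary size below are assumptions; the card does not state them):

```python
import sentencepiece as spm

# Train a unigram subword model on the combined parallel corpus,
# one sentence per line (file name and vocab size are hypothetical).
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",
    model_prefix="legal_t5_small_vocab",
    vocab_size=32000,
    model_type="unigram",
)
```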
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_es_small_finetuned | 47.006|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_sv_small_finetuned | f7aed217e30da1ee40403bfc6e06859e51b3aef1 | 2021-06-23T09:33:24.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_sv_small_finetuned | 3 | null | transformers | 20,913 |
---
language: Deustch Swedish
tags:
- translation Deustch Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Die Finanzkrise hat schonungslos offenbart, wo die Mängel in den Überwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verstößen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsfähigkeit gewährleisten sollen."
---
# legal_t5_small_trans_de_sv_small_finetuned model
Model for translating legal text from German to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_sv_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_de_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from German to Swedish.
### How to use
Here is how to use this model to translate legal text from German to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Die Finanzkrise hat schonungslos offenbart, wo die Mängel in den Überwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verstößen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsfähigkeit gewährleisten sollen."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_sv_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_sv_small_finetuned | 41.365|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_sv | fee5381d21b2f542bfdd7a3adfec40b9dc7909a8 | 2021-06-23T09:39:52.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_sv | 3 | null | transformers | 20,914 | Entry not found |
SEBIS/legal_t5_small_trans_es_cs_small_finetuned | c5ca957bd313d34979ebdb8ba86d239c14fbf6ed | 2021-06-23T09:42:41.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_cs_small_finetuned | 3 | null | transformers | 20,915 |
---
language: Spanish Cszech
tags:
- translation Spanish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Comisión (incluidas las réplicas)"
---
# legal_t5_small_trans_es_cs_small_finetuned model
Model for translating legal text from Spanish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_cs_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_es_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Czech.
### How to use
Here is how to use this model to translate legal text from Spanish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Comisión (incluidas las réplicas)"
pipeline([es_text], max_length=512)
```
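The pipeline also accepts a batch of sentences and returns one dictionary per input; a small usage sketch (the second sentence is an arbitrary example, not from the card):

```python
es_texts = [
    "Comisión (incluidas las réplicas)",
    "El Parlamento Europeo aprueba la propuesta de la Comisión.",
]
for result in pipeline(es_texts, max_length=512):
    print(result["translation_text"])
```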
## Training data
The legal_t5_small_trans_es_cs_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_cs_small_finetuned | 45.094|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_de_small_finetuned | 7e4905de62f69ff0f2f47c0b50d0066bef687c75 | 2021-06-23T09:43:57.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_de_small_finetuned | 3 | null | transformers | 20,916 |
---
language: Spanish Deustch
tags:
- translation Spanish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Manfred Weber , en nombre del Grupo PPE , al Consejo:"
---
# legal_t5_small_trans_es_de_small_finetuned model
Model for translating legal text from Spanish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_de_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_es_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to German.
### How to use
Here is how to use this model to translate legal text from Spanish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Manfred Weber , en nombre del Grupo PPE , al Consejo:"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_de_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
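For context, an AdaFactor setup of this kind is available in `transformers`; a minimal sketch (hyperparameters are illustrative defaults, not the authors' exact values, and `model` is assumed to be a loaded model):

```python
from transformers.optimization import Adafactor, AdafactorSchedule

# relative_step=True gives Adafactor its built-in inverse-square-root-style
# learning rate; AdafactorSchedule exposes it as a standard scheduler object.
optimizer = Adafactor(model.parameters(), scale_parameter=True,
                      relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)
```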
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_de_small_finetuned | 42.063|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_it_small_finetuned | 86749bc4aaf906eaedad2af8cd16cb1751922550 | 2021-06-23T09:48:02.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_it_small_finetuned | 3 | null | transformers | 20,917 |
---
language: Spanish Italian
tags:
- translation Spanish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "El acceso a las pruebas de densitometría ósea es totalmente inadecuado."
---
# legal_t5_small_trans_es_it_small_finetuned model
Model for translating legal text from Spanish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_it_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_es_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Italian.
### How to use
Here is how to use this model to translate legal text from Spanish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "El acceso a las pruebas de densitometría ósea es totalmente inadecuado."
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_it_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_it_small_finetuned | 46.422|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_sv | eb196185c1949dee9b7337f1c894b393c4a6128b | 2021-06-23T09:48:40.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_sv | 3 | null | transformers | 20,918 | Entry not found |
SEBIS/legal_t5_small_trans_fr_de | 55116970b03619d7ff16cdd58fdf30ee060271c6 | 2021-06-23T09:51:36.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_de | 3 | null | transformers | 20,919 |
---
language: French Deustch
tags:
- translation French Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."
---
# legal_t5_small_trans_fr_de model
Model for translating legal text from French to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to German.
### How to use
Here is how to use this model to translate legal text from French to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Les États membres notifient ces dispositions à la Commission au plus tard à la date mentionnée à l'article 15 et toute modification ultérieure les concernant dans les meilleurs délais."
pipeline([fr_text], max_length=512)
```
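`device=0` assumes a CUDA GPU; on a CPU-only machine the `transformers` convention is `device=-1`. A minimal variant of the same pipeline under that assumption:

```python
# Same pipeline as above, but running on CPU (device=-1).
pipeline_cpu = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_de"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False),
    device=-1,
)
```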
## Training data
The legal_t5_small_trans_fr_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_de | 41.33|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_de_small_finetuned | cffbffb815c9c827a715193f8af71830d97ea074 | 2021-06-23T09:52:23.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_de_small_finetuned | 3 | null | transformers | 20,920 |
---
language: French Deustch
tags:
- translation French Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "7. demande instamment à la Commission de veiller à ce que l'objectif d'une part de 20% d'énergie renouvelable soit rendue contraignante pour les États membres par des dispositions législatives à cet effet et soit mis en œuvre d'une manière conséquente, et à ce que les États membres qui n'honorent pas leurs engagements soient frappés de lourdes sanctions; souligne la nécessité de plans d'action nationaux dans le cadre desquels chaque État membre se fixe un objectif contraignant pour chaque secteur en fonction de ses possibilités spécifiques météorologiques, géographiques et géologiques et de ses réalisations dans le passé; demande instamment à la Commission de procéder à une évaluation préalable puis intermédiaire de ces plans d'action;"
---
# legal_t5_small_trans_fr_de_small_finetuned model
Model for translating legal text from French to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_de_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_fr_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to German.
### How to use
Here is how to use this model to translate legal text from French to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "7. demande instamment à la Commission de veiller à ce que l'objectif d'une part de 20% d'énergie renouvelable soit rendue contraignante pour les États membres par des dispositions législatives à cet effet et soit mis en œuvre d'une manière conséquente, et à ce que les États membres qui n'honorent pas leurs engagements soient frappés de lourdes sanctions; souligne la nécessité de plans d'action nationaux dans le cadre desquels chaque État membre se fixe un objectif contraignant pour chaque secteur en fonction de ses possibilités spécifiques météorologiques, géographiques et géologiques et de ses réalisations dans le passé; demande instamment à la Commission de procéder à une évaluation préalable puis intermédiaire de ces plans d'action;"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_de_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_de_small_finetuned | 41.085|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_en_small_finetuned | ed9c5a16b78534f1d69bf316ae37d294bb8bb265 | 2021-06-23T11:38:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"French English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_en_small_finetuned | 3 | null | transformers | 20,921 |
---
language: French English
tags:
- translation French English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "RÉSULTAT DU VOTE FINAL EN COMMISSION"
---
# legal_t5_small_trans_fr_en_small_finetuned model
Model for translating legal text from French to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_en_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_fr_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to English.
### How to use
Here is how to use this model to translate legal text from French to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "RÉSULTAT DU VOTE FINAL EN COMMISSION"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_en_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_en_small_finetuned | 51.351|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_es_small_finetuned | 6af7cde8f68ea1936273c8a51c388e27b773040f | 2021-06-23T09:55:17.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_es_small_finetuned | 3 | null | transformers | 20,922 |
---
language: French Spanish
tags:
- translation French Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "A‑t‑elle déjà engagé, ou compte-t-elle engager, la réalisation d'une étude visant, comme préconisé ci‑dessus, à recenser les principaux problèmes et les besoins spécifiques des régions ultrapériphériques en matière de transport maritime, compte tenu des caractéristiques et des besoins propres à ce secteur, dans la perspective de la réalisation des projets d'autoroutes de la mer dans lesdites régions? 2."
---
# legal_t5_small_trans_fr_es_small_finetuned model
Model for translating legal text from French to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_es_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_fr_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Spanish.
### How to use
Here is how to use this model to translate legal text from French to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "A‑t‑elle déjà engagé, ou compte-t-elle engager, la réalisation d'une étude visant, comme préconisé ci‑dessus, à recenser les principaux problèmes et les besoins spécifiques des régions ultrapériphériques en matière de transport maritime, compte tenu des caractéristiques et des besoins propres à ce secteur, dans la perspective de la réalisation des projets d'autoroutes de la mer dans lesdites régions? 2."
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_es_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_es_small_finetuned | 51.202|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_cs | a3a4528dd4cae73e66629effa51abfc41831a397 | 2021-06-23T09:58:17.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_cs | 3 | null | transformers | 20,923 |
---
language: Italian Cszech
tags:
- translation Italian Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "sull'aumento dei prezzi dei prodotti alimentari"
---
# legal_t5_small_trans_it_cs model
Model for translating legal text from Italian to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Czech.
### How to use
Here is how to use this model to translate legal text from Italian to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "sull'aumento dei prezzi dei prodotti alimentari"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_cs | 43.302|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_de_small_finetuned | 4cfacefaaf3193011ef34b1d6ba42c74ad9ee168 | 2021-06-23T10:00:06.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_de_small_finetuned | 3 | null | transformers | 20,924 |
---
language: Italian Deustch
tags:
- translation Italian Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Interventi sulla votazione:"
---
# legal_t5_small_trans_it_de_small_finetuned model
Model for translating legal text from Italian to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_de_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_it_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to German.
### How to use
Here is how to use this model to translate legal text from Italian to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Interventi sulla votazione:"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_de_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_de_small_finetuned | 40.524|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_fr_small_finetuned | 46378cd0ba56bb43df8db8fde0d84b886d5938f9 | 2021-06-23T10:03:39.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_fr_small_finetuned | 3 | null | transformers | 20,925 |
---
language: Italian French
tags:
- translation Italian French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Dichiarazioni del Consiglio e della Commissione"
---
# legal_t5_small_trans_it_fr_small_finetuned model
Model for translating legal text from Italian to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_fr_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_it_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to French.
### How to use
Here is how to use this model to translate legal text from Italian to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Dichiarazioni del Consiglio e della Commissione"
pipeline([it_text], max_length=512)
```
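Generation options such as beam search can be passed through the pipeline call; a small sketch (the `num_beams` value is an arbitrary choice, not taken from the card):

```python
# Beam search typically trades decoding speed for slightly better translations.
pipeline([it_text], max_length=512, num_beams=4)
```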
## Training data
The legal_t5_small_trans_it_fr_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_fr_small_finetuned | 50.557|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_de | b1f004a8b75dba1bb08b438d81aded348d0b4435 | 2021-06-23T10:06:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_de | 3 | null | transformers | 20,926 |
---
language: Swedish Deustch
tags:
- translation Swedish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "b) Bekämpning av skadegörare inom skogsbruket."
---
# legal_t5_small_trans_sv_de model
Model for translating legal text from Swedish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to German.
### How to use
Here is how to use this model to translate legal text from Swedish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "b) Bekämpning av skadegörare inom skogsbruket."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60M parameters and uses the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de | 40.264|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_de_small_finetuned | c072c4956c71e7d8c6baeed4faacbf3ab5a1d84e | 2021-06-23T10:07:30.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_de_small_finetuned | 3 | null | transformers | 20,927 |
---
language: Swedish Deustch
tags:
- translation Swedish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."
---
# legal_t5_small_trans_sv_de_small_finetuned model
Model for translating legal text from Swedish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data using an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_de_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") using all of the data in the training set. legal_t5_small_trans_sv_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to German.
### How to use
Here is how to use this model to translate legal text from Swedish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_de_small_finetuned model (covering the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
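To make the objective concrete, a masked input/target pair in the T5 sentinel format looks roughly like this (a sketch with hand-picked spans; during pre-training the spans were sampled randomly):
```python
# Sketch of the masked language modelling objective: random spans are
# replaced with sentinel tokens and the model must reconstruct them.
# The spans below are hand-picked for illustration.
sentence = "Kunden måste ha rätt att avsäga sig information i skriftlig form ."
tokens = sentence.split()

masked_input = tokens[:2] + ["<extra_id_0>"] + tokens[4:8] + ["<extra_id_1>"] + tokens[10:]
target = ["<extra_id_0>"] + tokens[2:4] + ["<extra_id_1>"] + tokens[8:10] + ["<extra_id_2>"]

print(" ".join(masked_input))  # Kunden måste <extra_id_0> att avsäga sig information <extra_id_1> form .
print(" ".join(target))        # <extra_id_0> ha rätt <extra_id_1> i skriftlig <extra_id_2>
```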
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de_small_finetuned | 40.240|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_fr | 297f1cc7013bc3ebc54a8d4e047abcb9df3f4bc3 | 2021-06-23T10:10:34.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_fr | 3 | null | transformers | 20,928 |
---
language: Swedish French
tags:
- translation Swedish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kunden måste ha rätt att avsäga sig information i skriftlig form."
---
# legal_t5_small_trans_sv_fr model
Model for translating legal text from Swedish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to French.
### How to use
Here is how to use this model to translate legal text from Swedish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Kunden måste ha rätt att avsäga sig information i skriftlig form."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_fr | 47.623|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_it_small_finetuned | 8bba44e37a085371f4cc15c497799e56228f391b | 2021-06-23T11:38:41.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_it_small_finetuned | 3 | null | transformers | 20,929 |
---
language: Swedish Italian
tags:
- translation Swedish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "– med beaktande av rådet beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena från unionens höga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva åtgärderna mot den syriska regimen,"
---
# legal_t5_small_trans_sv_it_small_finetuned model
Model for translating legal text from Swedish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task. Then the model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_it_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Italian.
### How to use
Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "– med beaktande av rådet beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena från unionens höga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva åtgärderna mot den syriska regimen,"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_it_small_finetuned model (covering the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_it_small_finetuned | 42.575|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
Sabokou/squad-qg-gen | a24ad059a9529473de57943144b9811550f60482 | 2022-01-04T09:21:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Sabokou | null | Sabokou/squad-qg-gen | 3 | null | transformers | 20,930 | Entry not found |
SaulLu/clip-vit-base-patch32 | 774a92e60dd54a3ae178cd128687c4080ea06709 | 2022-01-07T17:53:14.000Z | [
"pytorch",
"tf",
"jax",
"clip",
"feature-extraction",
"arxiv:2103.00020",
"arxiv:1908.04913",
"transformers",
"vision"
] | feature-extraction | false | SaulLu | null | SaulLu/clip-vit-base-patch32 | 3 | null | transformers | 20,931 | ---
tags:
- vision
---
# Model Card: CLIP
Disclaimer: The model card is taken and modified from the official CLIP repository; it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).
## Model Details
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
### Model Date
January 2021
### Model Type
The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.
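The contrastive objective mentioned above can be sketched in a few lines of PyTorch (an illustration of the idea, not the actual training code):
```python
# Sketch of CLIP's symmetric contrastive loss: matching (image, text) pairs
# sit on the diagonal of the similarity matrix and are treated as the
# correct "class" in both directions.
import torch
import torch.nn.functional as F

def contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = image_embeds @ text_embeds.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    loss_images = F.cross_entropy(logits, labels)      # image -> text
    loss_texts = F.cross_entropy(logits.t(), labels)   # text -> image
    return (loss_images + loss_texts) / 2

print(contrastive_loss(torch.randn(4, 512), torch.randn(4, 512)))
```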
### Model Version
Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50.
*This port does not include the ResNet model.*
Please see the paper linked below for further details about their specification.
### Documents
- [Blog Post](https://openai.com/blog/clip/)
- [CLIP Paper](https://arxiv.org/abs/2103.00020)
### Use with Transformers
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
## Data
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
### Data Mission Statement
Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
## Performance and Limitations
### Performance
We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets, ranging from OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
- Food101
- CIFAR10
- CIFAR100
- Birdsnap
- SUN397
- Stanford Cars
- FGVC Aircraft
- VOC2007
- DTD
- Oxford-IIIT Pet dataset
- Caltech101
- Flowers102
- MNIST
- SVHN
- IIIT5K
- Hateful Memes
- SST-2
- UCF101
- Kinetics700
- Country211
- CLEVR Counting
- KITTI Distance
- STL-10
- RareAct
- Flickr30
- MSCOCO
- ImageNet
- ImageNet-A
- ImageNet-R
- ImageNet Sketch
- ObjectNet (ImageNet Overlap)
- Youtube-BB
- ImageNet-Vid
## Limitations
CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.
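A linear probe in this sense is simply a linear classifier fit on frozen features; a minimal sketch (the random arrays are placeholders for pre-extracted CLIP image embeddings and their labels):
```python
# Sketch of a linear probe: fit a logistic regression on frozen features.
# The random arrays stand in for pre-extracted CLIP image embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
train_features, train_labels = rng.normal(size=(100, 512)), rng.integers(0, 2, 100)
test_features, test_labels = rng.normal(size=(20, 512)), rng.integers(0, 2, 20)

probe = LogisticRegression(max_iter=1000)
probe.fit(train_features, train_labels)
print("probe accuracy:", probe.score(test_features, test_labels))
```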
### Bias and Fairness
We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
## Feedback
### Where to send questions or comments about the model
Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9) |
SaulLu/test-model | 55d26e0d133614cea32eedffe8857cee9c702a1e | 2021-05-28T12:28:31.000Z | [
"pytorch",
"albert",
"pretraining",
"transformers"
] | null | false | SaulLu | null | SaulLu/test-model | 3 | null | transformers | 20,932 | ---
language:
-
-
thumbnail:
tags:
-
-
-
license:
datasets:
-
-
metrics:
-
-
---
# sahajBERT News Category Classification
## Model description
You can embed local or remote images using ``
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
### Collaborative training procedure
[here](https://huggingface.co/albertvillanova)
###
Preprocessing, hardware used, hyperparameters...
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
``` |
SauravMaheshkar/bert-large-uncased-whole-word-masking-chaii | 6a88e232e544a824a2c615a33d8c4e9964916b8d | 2021-10-14T14:29:25.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/bert-large-uncased-whole-word-masking-chaii | 3 | null | transformers | 20,933 | Entry not found |
SauravMaheshkar/clr-finetuned-xlm-roberta-base | b967aaa565ab786b014c9d1a1570acd005c21dff | 2021-09-23T15:57:48.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
] | fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-finetuned-xlm-roberta-base | 3 | null | transformers | 20,934 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
---

# FineTuning
| **Architecture** | **Weights** | **Training Loss** | **Validation Loss** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 |
| xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 |
| bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 |
| albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 |
| roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
|
SauravMaheshkar/clr-pretrained-electra-base | 542a5a7509e14499ff0333aaba9663a31f03c5ce | 2021-09-23T15:57:58.000Z | [
"pytorch",
"electra",
"pretraining",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0"
] | null | false | SauravMaheshkar | null | SauravMaheshkar/clr-pretrained-electra-base | 3 | null | transformers | 20,935 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
metrics:
- Perplexity
---

# PreTraining
| **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 |
| electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 |
| electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 |
| electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 |
| distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
|
SauravMaheshkar/distilbert-base-uncased-distilled-chaii | 15f7be7d75493b00fd884e5953cbf6bbf7ac26c4 | 2021-10-14T12:53:25.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/distilbert-base-uncased-distilled-chaii | 3 | null | transformers | 20,936 | Entry not found |
SauravMaheshkar/roberta-base-chaii | ef1e6573dbb6be73c66ae1bfb1204d10d746773d | 2021-10-14T12:35:50.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/roberta-base-chaii | 3 | null | transformers | 20,937 | Entry not found |
SauravMaheshkar/xlm-multi-roberta-large-chaii | 9b30328adb52ef090fc40310f2c43ee1ba0df935 | 2021-10-13T16:53:05.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/xlm-multi-roberta-large-chaii | 3 | null | transformers | 20,938 | Entry not found |
SauravMaheshkar/xlm-roberta-large-chaii | 8b185df2539988447b68dd93589b7003ffdb4d79 | 2021-10-14T06:15:38.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/xlm-roberta-large-chaii | 3 | null | transformers | 20,939 | Entry not found |
Science-geek32/DialoGPT-small-doctor2.0 | a89100ff07209031feffef9caf9e5f471c33925d | 2021-10-19T23:18:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Science-geek32 | null | Science-geek32/DialoGPT-small-doctor2.0 | 3 | null | transformers | 20,940 | ---
tags:
- conversational
---
# 13th Doctor DialoGPT-small Model |
ScottaStrong/DialogGPT-small-joshua | e69536475aeff7777bf7a0268e2f72c3d9c4e645 | 2021-06-16T21:40:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | ScottaStrong | null | ScottaStrong/DialogGPT-small-joshua | 3 | null | transformers | 20,941 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch  # needed for torch.cat below
tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-joshua")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
    # pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
SebastianS/dummy-model | a27a33a00757c968f4a78cbfb14cd3fd009e3d82 | 2021-12-24T16:44:54.000Z | [
"pytorch",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | SebastianS | null | SebastianS/dummy-model | 3 | null | transformers | 20,942 | ---
language: fr
license: mit
datasets:
- oscar
---
# dummy
this is only a dummy model, originally based on the RoBERTa model
## intended uses and limitations
not intended to be used, same limitations as camembert-base model
## how to use
it can't be used (lol)
## training data
French subcorpus of the newly available multilingual corpus OSCAR
## training procedure
evaluated on multiple downstream tasks
## variable and metrics
not explicitly stated
## evaluation metrics
maybe OSCAR
## evaluation results
not explicitly stated
|
Sebu/dummy-model | c90b1a5be9650ff6b530f761e3c3092d35ddb4aa | 2022-01-05T13:10:04.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Sebu | null | Sebu/dummy-model | 3 | null | transformers | 20,943 | Entry not found |
Seongkyu/bert-base-cased-finetuned-squad | d373d005871586c090254b955a1a03b7ce0f225d | 2021-12-07T09:52:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Seongkyu | null | Seongkyu/bert-base-cased-finetuned-squad | 3 | null | transformers | 20,944 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
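These settings correspond roughly to the following `TrainingArguments` (a sketch; `output_dir` is a placeholder, and the betas/epsilon listed above are the Adam defaults):
```python
# Rough TrainingArguments equivalent of the hyperparameters listed above.
# output_dir is a placeholder; the Adam betas/epsilon are the defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-base-cased-finetuned-squad",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```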
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0179 | 1.0 | 6194 | 0.9548 |
| 0.7277 | 2.0 | 12388 | 0.9717 |
| 0.507 | 3.0 | 18582 | 1.0458 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ShengdingHu/bitfit_t5-base_cola | 5a9acee8a6121d379b50eaf78586e8edb1ad2afb | 2022-02-23T14:03:47.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_cola | 3 | null | transformers | 20,945 | Entry not found |
ShengdingHu/lora_t5-base_superglue-wsc.fixed | 8eb18d7e30b6f05fc14a519b1d8d8cd19422091f | 2022-02-02T10:34:47.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_superglue-wsc.fixed | 3 | null | transformers | 20,946 | Entry not found |
ShengdingHu/superglue-boolq | 619a41680e9b2d69818e3edb9476769f8a9934a9 | 2022-05-13T09:37:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-boolq | 3 | null | transformers | 20,947 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: superglue-boolq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superglue-boolq
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2098
- Accuracy: 76.7584
- Average Metrics: 76.7584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Average Metrics |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|
| No log | 0.34 | 100 | 0.2293 | 73.2722 | 73.2722 |
| No log | 0.68 | 200 | 0.2098 | 76.7584 | 76.7584 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
- Datasets 1.17.0
- Tokenizers 0.12.1
|
ShengdingHu/superglue-wic | f6f92c9df309199d81798af821de0ce57a0ad8c8 | 2022-02-02T10:30:20.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-wic | 3 | null | transformers | 20,948 | Entry not found |
Shushant/ContaminationQuestionAnswering | 0922b2a2d2dabc5fab30480e126f43cb0e8ba8d4 | 2022-01-14T15:21:51.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Shushant | null | Shushant/ContaminationQuestionAnswering | 3 | null | transformers | 20,949 | Entry not found |
Sid51/Chan | 0569544608d8a801c85ff8f08dbf94f816224764 | 2021-06-10T20:30:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Sid51 | null | Sid51/Chan | 3 | null | transformers | 20,950 | Entry not found |
Simovod/simRU | 4c6efb22fdfcabdc7383d560c148b37d219795ba | 2021-08-06T13:45:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Simovod | null | Simovod/simRU | 3 | null | transformers | 20,951 | Entry not found |
Siyris/DialoGPT-medium-SIY | 8b960785c1d14bc04857c70b991ea140ce5d0d03 | 2021-07-05T06:55:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | Siyris | null | Siyris/DialoGPT-medium-SIY | 3 | null | transformers | 20,952 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on a customized various spiritual texts and mixed with various different character personalities.
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better. I've also trained it on various channeling experiences. I'm testing mixing this dataset with characters from popular shows with the intention of creating more diverse dialogue.
I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch  # needed for torch.cat below
tokenizer = AutoTokenizer.from_pretrained("Siyris/DialoGPT-medium-SIY")
model = AutoModelWithLMHead.from_pretrained("Siyris/DialoGPT-medium-SIY")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
    # pretty print last output tokens from bot
print("SIY: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
Siyris/SIY | f7fc21dc57868be62bf370d4a9fca88dc66d4005 | 2021-06-28T08:25:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | Siyris | null | Siyris/SIY | 3 | null | transformers | 20,953 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on a customized version of The Law of One.
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better.
I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch  # needed for torch.cat below
tokenizer = AutoTokenizer.from_pretrained("Siyris/SIY")
model = AutoModelWithLMHead.from_pretrained("Siyris/SIY")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
    # pretty print last output tokens from bot
print("SIY: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
SmokeAndAsh/DialoGPT-small-sokka | c7ff38553fc27e5ca13d58fdb83ecdcbbf00dde9 | 2022-02-16T01:23:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SmokeAndAsh | null | SmokeAndAsh/DialoGPT-small-sokka | 3 | null | transformers | 20,954 | ---
tags:
- conversational
---
# Sokka DialoGPT Model |
Souranil/VAE | abc07b4b0021d2c1805a6d099a82af670ffa668a | 2022-02-18T11:32:27.000Z | [
"pytorch",
"transformers",
"license:apache-2.0"
] | null | false | Souranil | null | Souranil/VAE | 3 | null | transformers | 20,955 | ---
license: apache-2.0
---
### VAE with Pytorch-Lightning
This is inspired by vae-playground: an example where we test out `vae` and `conv_vae` models on multiple datasets,
such as MNIST, CelebA and Fashion-MNIST.
It also comes with an example Streamlit app, deployed on Hugging Face.
## Model Training
You can train the VAE models by using `train.py` and editing the `config.yaml` file. \
Hyperparameters to change are:
- model_type [vae|conv_vae]
- alpha
- hidden_dim
- dataset [celeba|mnist|fashion-mnist]
There are other configurations that can be changed if required, such as height, width, channels, etc. The file also contains the pytorch-lightning configs.
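As an example, those values could be changed programmatically before launching `train.py` (a sketch; the key names are assumptions based on the list above, not verified against the repository):
```python
# Sketch: editing the training configuration before running train.py.
# The key names are assumptions based on the hyperparameter list above.
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

config["model_type"] = "conv_vae"   # vae | conv_vae
config["alpha"] = 1.0
config["hidden_dim"] = 128
config["dataset"] = "celeba"        # celeba | mnist | fashion-mnist

with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f)
```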
|
Spectrox/emmybot | f34931a9b74f61d76afda71f888fde451a8f19f2 | 2021-12-29T02:21:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Spectrox | null | Spectrox/emmybot | 3 | null | transformers | 20,956 | ---
tags:
- conversational
---
# EmmyBot |
Stabley/DialoGPT-small-evelynn | 984c0a78572ab7afda0918d761d51679395df35f | 2021-08-27T21:50:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Stabley | null | Stabley/DialoGPT-small-evelynn | 3 | null | transformers | 20,957 | ---
tags:
- conversational
---
# Evelynn DialoGPT Model |
StephennFernandes/XLS-R-300m-marathi | e211ad503bd78fda2eb24f669233ef9ba911ff8d | 2022-02-11T04:09:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | StephennFernandes | null | StephennFernandes/XLS-R-300m-marathi | 3 | null | transformers | 20,958 | Entry not found |
Sunbird/sunbird-en-mul | 0d156d1bfe552f72643241c16484b061cbd29327 | 2022-03-30T13:30:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Sunbird | null | Sunbird/sunbird-en-mul | 3 | null | transformers | 20,959 | Entry not found |
SvyatoslavA/model_awara_text | 07310895936b13f5257b4cd004acec2333f3f61d | 2022-01-20T10:58:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | SvyatoslavA | null | SvyatoslavA/model_awara_text | 3 | null | transformers | 20,960 | Entry not found |
T-Systems-onsite/cross-de-ru-roberta-sentence-transformer | 3b8059fb109c1825f2ee6ed0e8c2d0c2a6a76a62 | 2022-06-28T19:56:26.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-de-ru-roberta-sentence-transformer | 3 | null | transformers | 20,961 | Entry not found |
T-Systems-onsite/cross-en-de-it-roberta-sentence-transformer | f544fa4d18680d91c5bc1e48ccb706f4777274b0 | 2022-06-28T19:56:54.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-de-it-roberta-sentence-transformer | 3 | null | transformers | 20,962 | Entry not found |
T-Systems-onsite/cross-en-pl-it-roberta-sentence-transformer | 206c666c8a686962de41fc13fb1dad7373825860 | 2020-12-29T07:30:25.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-pl-it-roberta-sentence-transformer | 3 | null | transformers | 20,963 | Entry not found |
T-Systems-onsite/cross-en-pl-roberta-sentence-transformer | 68eedc16086a239b743ffd64821a6facc35abbc8 | 2022-06-28T19:42:03.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-pl-roberta-sentence-transformer | 3 | null | transformers | 20,964 | Entry not found |
T-Systems-onsite/cross-en-pt-roberta-sentence-transformer | 5186195ca2fb7aaf26f29ee82d44218bb40347b4 | 2021-04-06T19:11:51.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-pt-roberta-sentence-transformer | 3 | null | transformers | 20,965 | Entry not found |
Taekyoon/dpr_context | 2e7c191f434760ca8228ed333e252a5b9c070f44 | 2022-02-06T13:03:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Taekyoon | null | Taekyoon/dpr_context | 3 | null | transformers | 20,966 | Entry not found |
Taekyoon/dpr_question | 3da06bc6ead697241e5e50e45e06a6ee5f7c028f | 2022-02-06T13:02:29.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Taekyoon | null | Taekyoon/dpr_question | 3 | null | transformers | 20,967 | Entry not found |
TalTechNLP/xls-r-300m-et | a1a327b54c3ecbb4750ce8c75aa7ee996030753f | 2022-05-18T09:57:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"et",
"transformers",
"audio",
"hf-asr-leaderboard",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | TalTechNLP | null | TalTechNLP/xls-r-300m-et | 3 | 1 | transformers | 20,968 | ---
license: cc-by-4.0
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
language: et
model-index:
- name: xls-r-300m-et
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 12.520395591222402
- name: Test CER
type: cer
value: 2.7091152438624897
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: et
metrics:
- name: Test WER
type: wer
value: 13.38447882323104
- name: Test CER
type: cer
value: 2.9816686199500255
---
# XLS-R-300m-ET
This is a XLS-R-300M model [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) finetuned on around 800 hours of diverse Estonian data.
## Model description
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech. It consists of only the CTC-based end-to-end model; no language model is currently provided.
## Intended uses & limitations
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
## How to use
TODO
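In the meantime, a minimal usage sketch, assuming the standard Transformers Wav2Vec2 CTC interface and 16 kHz input audio (`audio.wav` is a placeholder path):
```python
# Minimal sketch: standard Wav2Vec2 CTC decoding at 16 kHz.
# "audio.wav" is a placeholder path.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("TalTechNLP/xls-r-300m-et")
model = Wav2Vec2ForCTC.from_pretrained("TalTechNLP/xls-r-300m-et")

speech, _ = librosa.load("audio.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```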
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
## Training data
Acoustic training data:
| Type | Amount (h) |
|-----------------------|:------:|
| Broadcast speech | 591 |
| Spontaneous speech | 53 |
| Elderly speech corpus | 53 |
| Talks, lectures | 49 |
| Parliament speeches | 31 |
| *Total* | *761* |
## Training procedure
Finetuned using Fairseq.
## Evaluation results
### WER
|Dataset | WER |
|---|---|
| jutusaated.devset | 7.9 |
| jutusaated.testset | 6.1 |
| Common Voice 6.1 | 12.5 |
| Common Voice 8.0 | 13.4 |
|
Tarang1998/autonlp-pegasus-21664560 | d58ee350774d83fe985d567c84e4a04e850db564 | 2021-10-19T05:22:41.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:Tarang1998/autonlp-data-pegasus",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | Tarang1998 | null | Tarang1998/autonlp-pegasus-21664560 | 3 | null | transformers | 20,969 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Tarang1998/autonlp-data-pegasus
co2_eq_emissions: 5.680803958729511
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21664560
- CO2 Emissions (in grams): 5.680803958729511
## Validation Metrics
- Loss: 1.7488420009613037
- Rouge1: 38.1491
- Rouge2: 18.6257
- RougeL: 26.8448
- RougeLsum: 32.2433
- Gen Len: 49.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Tarang1998/autonlp-pegasus-21664560
``` |
TeamAlerito/gti-coco-en | 5319378f7035b722c785764e1eaaf9af7941945c | 2021-11-17T14:44:54.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"vision-encoder-decoder",
"generic",
"image-classification"
] | image-classification | false | TeamAlerito | null | TeamAlerito/gti-coco-en | 3 | null | generic | 20,970 | ---
tags:
- image-classification
library_name: generic
---
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
**In PyTorch**
```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = VisionEncoderDecoderModel.from_pretrained(loc)
model.eval()
def predict(image):
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
output_ids = model.generate(pixel_values, max_length=16, num_beams=4, return_dict_in_generate=True).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
preds = predict(image)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
```
**In Flax**
```python
import jax
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
gen_kwargs = {"max_length": 16, "num_beams": 4}
# This takes sometime when compiling the first time, but the subsequent inference will be much faster
@jax.jit
def generate(pixel_values):
output_ids = model.generate(pixel_values, **gen_kwargs).sequences
return output_ids
def predict(image):
pixel_values = feature_extractor(images=image, return_tensors="np").pixel_values
output_ids = generate(pixel_values)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
preds = predict(image)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
``` |
TheLongSentance/t5_mimic_final_chkpnt15000 | eeb985dffbefedf790c93f1d4ed05f300701c2f8 | 2021-09-16T11:10:41.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | TheLongSentance | null | TheLongSentance/t5_mimic_final_chkpnt15000 | 3 | null | transformers | 20,971 | Entry not found |
ThePixOne/retBERT | bcd505e5f1c64d2e4ffd2017e9f15f94091b409f | 2022-01-11T18:24:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ThePixOne | null | ThePixOne/retBERT | 3 | 1 | transformers | 20,972 | BERT finetuned on wallstreetbets subreddit |
Tito/T5small_model3_lr_2e-3-finetuned-en-to-de | 82f7e4b6bdd58b6db90ea01a8d7010190c6ff096 | 2021-12-07T01:01:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Tito | null | Tito/T5small_model3_lr_2e-3-finetuned-en-to-de | 3 | null | transformers | 20,973 | Entry not found |
Tommi/wav2vec2-large-xlsr-53-finnish | ce901966622ec09616429d502fe009328046761c | 2021-07-05T17:57:47.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:common_voice",
"dataset:CSS10",
"dataset:Finnish parliament session 2",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Tommi | null | Tommi/wav2vec2-large-xlsr-53-finnish | 3 | null | transformers | 20,974 | ---
language: fi
datasets:
- common_voice
- CSS10
- Finnish parliament session 2
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Finnish XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 35.43
---
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa  # needed by the resampler lambda below
import numpy as np
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
resampler = lambda sr, y: librosa.resample(y.squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array.numpy()).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("Tommi/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\"\%\'\"\�\'\...\…\–\é]'
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate the model: run batched forward passes and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 35.43 %
## Training
The Common Voice `train`, `validation`, and `other` datasets were used for training, as well as CSS10 and Finnish parliament session 2.
The script used for training can be found [here](...).
|
TransQuest/microtransquest-en_cs-it-smt | 5efc48e7d615d654b9c2a566be669ee0f4b8cc04 | 2021-06-04T08:20:15.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"en-cs",
"transformers",
"Quality Estimation",
"microtransquest",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | TransQuest | null | TransQuest/microtransquest-en_cs-it-smt | 3 | null | transformers | 20,975 | ---
language: en-cs
tags:
- Quality Estimation
- microtransquest
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words, and target gaps (see the illustrative sketch after this list).
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest).
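To make the word-level output concrete, here is a purely illustrative sketch of the tag structure; the tag values below are invented for illustration and do not come from the model:

```python
# Hypothetical word-level QE output for one sentence pair.
# Every source word gets an "OK"/"BAD" tag; on the target side, both the
# words and the gaps between them are tagged, so BAD gaps flag missing words.
source_tags = ["OK", "OK", "BAD", "OK"]        # one tag per source word
target_tags = ["OK", "OK", "OK", "BAD", "OK"]  # tags for target words and gaps
```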
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch

# Load the pre-trained word-level QE model for this language pair.
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_cs-it-smt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())

# Predict word-level quality tags for a [source, translation] pair.
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
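A minimal sketch of inspecting the result, assuming `predict` returns one list of `OK`/`BAD` tags per input pair (an assumption read off the call above, not verified against the library documentation):

```python
# Tags for the first (and only) input pair.
print(source_tags[0])  # e.g. ['OK', 'OK', ..., 'BAD'] — one tag per source word
print(target_tags[0])  # tags covering the target words and the gaps between them
```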
## Documentation
For more details, follow the documentation below.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest.
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation.
3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs, covering both sentence-level and word-level QE.
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest.
## Citations
If you are using the word-level architecture, please consider citing this paper, which was accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) (collocated with EMNLP 2020).
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
Transabrar/distilroberta-base-finetuned-abr | 3e78d62994b639cd3caa47bcd7853812015a8cf3 | 2021-10-07T17:41:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Transabrar | null | Transabrar/distilroberta-base-finetuned-abr | 3 | null | transformers | 20,976 | Entry not found |
Transabrar/scibert_scivocab_uncased-finetuned-scibero | f067206f3d34765781cc2be9b96ce57266dfd2be | 2021-10-16T11:44:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Transabrar | null | Transabrar/scibert_scivocab_uncased-finetuned-scibero | 3 | null | transformers | 20,977 | Entry not found |
TuhinColumbia/Creativity1 | 033dee531454e52b02e5daa1f88ab9862c77ba02 | 2021-11-02T16:51:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | TuhinColumbia | null | TuhinColumbia/Creativity1 | 3 | 1 | transformers | 20,978 | Entry not found |
TuhinColumbia/romancelanguagepoetry | 2711ff6aff31687e62716331ef8589f93662639a | 2021-09-10T14:42:44.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | TuhinColumbia | null | TuhinColumbia/romancelanguagepoetry | 3 | null | transformers | 20,979 | Entry not found |
TurkuNLP/wikibert-base-ar-cased | 0005da9e01c0d747f76546554d7f044e4214dd5c | 2020-05-24T19:58:38.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-ar-cased | 3 | null | transformers | 20,980 | Entry not found |
TurkuNLP/wikibert-base-fa-cased | 00e0171bc5965005c491b1ecd737bdc974e9b59f | 2020-05-24T19:59:47.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-fa-cased | 3 | null | transformers | 20,981 | Entry not found |
TurkuNLP/wikibert-base-fr-cased | 47307f3cc0c46085b9f6270d9dd4c09c1364ac18 | 2020-05-24T19:59:57.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-fr-cased | 3 | null | transformers | 20,982 | Entry not found |
TurkuNLP/wikibert-base-gl-cased | 273929cd9f7ca50102fbc5efdbe9a2f4e3a5f4ac | 2020-05-24T20:00:08.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-gl-cased | 3 | null | transformers | 20,983 | Entry not found |
TurkuNLP/wikibert-base-he-cased | 6e5df214ec504a104cb63a7e51929d2fb69426b4 | 2020-05-24T20:00:13.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-he-cased | 3 | null | transformers | 20,984 | Entry not found |
TurkuNLP/wikibert-base-id-cased | 3ce0232ddada3b45b41a925e7d9ebaad61419f05 | 2020-05-24T20:00:42.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-id-cased | 3 | null | transformers | 20,985 | Entry not found |
TurkuNLP/wikibert-base-lv-cased | 78331b5aa67c5af36d93b78478ceb5c08a551bf0 | 2020-05-24T20:01:02.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-lv-cased | 3 | null | transformers | 20,986 | Entry not found |
TurkuNLP/wikibert-base-ru-cased | 9aa8062f53695b5b3512cd52c11ca367c860397f | 2020-05-24T20:01:32.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-ru-cased | 3 | null | transformers | 20,987 | Entry not found |
TurkuNLP/wikibert-base-sv-cased | e0983169149f4d3accf4983c6a6660ae0bbc28ee | 2020-05-24T20:01:54.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-sv-cased | 3 | null | transformers | 20,988 | Entry not found |
TurkuNLP/wikibert-base-ta-cased | c70a68ba6e35a23db9d517373e39cc15a594b5e6 | 2020-05-24T20:02:00.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-ta-cased | 3 | null | transformers | 20,989 | Entry not found |
TypicaAI/magbert-lm | e51fa2a7aa942e9994560b3a1de772534e27df40 | 2020-10-01T23:18:10.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | TypicaAI | null | TypicaAI/magbert-lm | 3 | null | transformers | 20,990 | Entry not found |
Unbabel/XLM-R-10L | 23502d7d30acbb0ecb1d50dc5a2d9ef8a80714be | 2022-01-05T19:49:00.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-10L | 3 | null | transformers | 20,991 | Entry not found |
Unbabel/XLM-R-15L | b10807c8ddea27bb524283b0c226ae4270de1151 | 2022-01-05T20:25:41.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-15L | 3 | null | transformers | 20,992 | Entry not found |
Unbabel/XLM-R-16L | ead530bb756405648ff0377db729b3c46f4a01fd | 2022-01-05T20:33:46.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-16L | 3 | null | transformers | 20,993 | Entry not found |
Unbabel/XLM-R-17L | 9fbaed635f229d1a2a409b30b72bf080b6bfd9b8 | 2022-01-05T20:41:37.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-17L | 3 | null | transformers | 20,994 | Entry not found |
Unbabel/XLM-R-19L | a6eecc174d0a520cd580537c8df54b8498b2f62e | 2022-01-05T20:57:05.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-19L | 3 | null | transformers | 20,995 | Entry not found |
Unbabel/XLM-R-2L | e17ec1c317c998dc63d95794efa4dcadeddffb59 | 2022-01-05T18:57:53.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-2L | 3 | null | transformers | 20,996 | Entry not found |
Unbabel/XLM-R-3L | 2bef21606b5c93183bf1bf67f9b059dcd018391f | 2022-01-05T19:05:46.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-3L | 3 | null | transformers | 20,997 | Entry not found |
Unbabel/XLM-R-7L | c5c15da0298f597114a09172dc9fdad8d9ef9275 | 2022-01-05T19:28:54.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-7L | 3 | null | transformers | 20,998 | Entry not found |
Unbabel/XLM-R-8L | dcc4396ecaff0c35795f49792ee7368881268237 | 2022-01-05T19:35:20.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Unbabel | null | Unbabel/XLM-R-8L | 3 | null | transformers | 20,999 | Entry not found |