modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SEBIS/legal_t5_small_multitask_cs_es | 4756698308f45298462e1a66240be756cbfd46ec | 2021-06-23T10:51:58.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_cs_es | 2 | null | transformers | 23,400 |
---
language: Cszech Spanish
tags:
- translation Cszech Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Antonio Tajani (místopředseda Komise) ."
---
# legal_t5_small_multitask_cs_es model
Model for translating legal text from Czech to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_cs_es model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Spanish.
### How to use
Here is how to use this model to translate legal text from Czech to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Antonio Tajani (místopředseda Komise) ."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_es model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
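The exact preprocessing scripts are not published in this card; the following is a minimal sketch of how such a unigram vocabulary could be built with SentencePiece. The input path and vocabulary size are illustrative assumptions, not values stated by the authors.

```python
import sentencepiece as spm

# Minimal sketch (not the authors' script): train a unigram subword model on the
# combined parallel-corpus text to obtain a shared vocabulary.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",  # hypothetical file: one sentence per line
    model_prefix="legal_t5_vocab",
    vocab_size=32000,             # assumption; the card does not state the size
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("Antonio Tajani (místopředseda Komise).", out_type=str))
```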
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_es | 48.559|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_es | 696c3dced92cec1ce7344b406f7e8b1594bc775a | 2021-06-23T10:54:59.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_de_es | 2 | null | transformers | 23,401 |
---
language: Deustch Spanish
tags:
- translation Deustch Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kugelförmige, eiförmige oder ellipsenförmige Verpackungen dürfen keine Abmessungen aufweisen, die durch eine Einklemmung im Mund oder Rachen eine Blockierung der internen Atemwege verursachen können."
---
# legal_t5_small_multitask_de_es model
Model for translating legal text from German to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_de_es model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Spanish.
### How to use
Here is how to use this model to translate legal text from German to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Kugelförmige, eiförmige oder ellipsenförmige Verpackungen dürfen keine Abmessungen aufweisen, die durch eine Einklemmung im Mund oder Rachen eine Blockierung der internen Atemwege verursachen können."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_es model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
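For illustration, the inverse square root schedule named above can be written as a small function. This is a sketch, not the authors' training code; the base learning rate and warmup length are assumptions, since the card only names the schedule.

```python
def inverse_sqrt_lr(step: int, base_lr: float = 1e-3, warmup_steps: int = 10_000) -> float:
    """Inverse square root schedule: constant during warmup, then decays as 1/sqrt(step)."""
    step = max(step, 1)
    return base_lr * min(1.0, (warmup_steps / step) ** 0.5)

# Learning rate at a few points of a 250K-step run (values depend on the assumed hyperparameters).
for s in (1_000, 10_000, 100_000, 250_000):
    print(s, round(inverse_sqrt_lr(s), 6))
```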
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_es | 36.458|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_es | 5ae7bcd312a6e32fde6aff53074659b2bd6a7895 | 2021-06-23T10:58:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_en_es | 2 | null | transformers | 23,402 |
---
language: English Spanish
tags:
- translation English Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Amendment 14 Article 5, paragraph 1, point (a)"
---
# legal_t5_small_multitask_en_es model
Model for translating legal text from English to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_en_es model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from English to Spanish.
### How to use
Here is how to use this model to translate legal text from English to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Amendment 14 Article 5, paragraph 1, point (a)"
pipeline([en_text], max_length=512)
```
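In recent versions of transformers, `AutoModelWithLMHead` is deprecated in favour of `AutoModelForSeq2SeqLM`. A sketch of the equivalent call with the newer API follows; the generation settings are illustrative, not values prescribed by this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_en_es")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_multitask_en_es")

inputs = tokenizer("Amendment 14 Article 5, paragraph 1, point (a)", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)  # decoding settings left at their defaults
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```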
## Training data
The legal_t5_small_multitask_en_es model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_es | 37.404|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_fr | 17092867793e91a7f00277f57c05c70c7d2eb48e | 2021-06-23T10:59:29.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_en_fr | 2 | null | transformers | 23,403 |
---
language: English French
tags:
- translation English French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Article 2(b), sub-heading"
---
# legal_t5_small_multitask_en_fr model
Model for translating legal text from English to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_en_fr model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from English to French.
### How to use
Here is how to use this model to translate legal text from English to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Article 2(b), sub-heading"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_fr | 38.063|
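A corpus-level BLEU score such as the one above could be recomputed with, for example, sacrebleu. This is a sketch only: the hypothesis and reference lists below are placeholders, not the actual test set or the authors' evaluation script.

```python
import sacrebleu

# Placeholder data: in practice, `hypotheses` holds the model's translations of the
# test set and `references` the corresponding gold French sentences.
hypotheses = ["<model translation of test sentence 1>", "<model translation of test sentence 2>"]
references = [["<reference translation 1>", "<reference translation 2>"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.3f}")
```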
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_de | bd58c98268119d7bb752a7d35e0cb8508ad1f449 | 2021-06-23T11:02:08.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_es_de | 2 | null | transformers | 23,404 |
---
language: Spanish Deustch
tags:
- translation Spanish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Estudios y publicaciones realizados por el Parlamento Europeo"
---
# legal_t5_small_multitask_es_de model
Model for translating legal text from Spanish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_es_de model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Spanish to German.
### How to use
Here is how to use this model to translate legal text from Spanish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Estudios y publicaciones realizados por el Parlamento Europeo"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_de model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_de | 41.196|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_en | 30d44b508150c69eb5c9b26f6d14d303279702e8 | 2021-06-23T11:03:37.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_es_en | 2 | null | transformers | 23,405 |
---
language: Spanish English
tags:
- translation Spanish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "PPE-DE: 6', PSE: 6', ALDE: 5', Verts/ALE: 4', GUE/NGL: 4', IND/DEM:4', UEN: 4', NI: 4'"
---
# legal_t5_small_multitask_es_en model
Model for translating legal text from Spanish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_es_en model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Spanish to English.
### How to use
Here is how to use this model to translate legal text from Spanish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "PPE-DE: 6', PSE: 6', ALDE: 5', Verts/ALE: 4', GUE/NGL: 4', IND/DEM:4', UEN: 4', NI: 4'"
pipeline([es_text], max_length=512)
```
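The snippet above assumes a GPU (`device=0`). Following the standard transformers convention, `device=-1` runs the pipeline on CPU, and a list of sentences can be translated in a single call. A sketch (the example inputs are just Spanish sentences reused from the widgets of related cards):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

# Same model as above, but on CPU and with a small batch of Spanish inputs.
pipeline_cpu = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_en"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_es_en", do_lower_case=False),
    device=-1,
)
print(pipeline_cpu(["Fecha del anuncio en el Pleno",
                    "Estudios y publicaciones realizados por el Parlamento Europeo"], max_length=512))
```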
## Training data
The legal_t5_small_multitask_es_en model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_en | 36.607|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_fr | 993f0a50fde4ed9e20f6b077d95fe89a98484d98 | 2021-06-23T11:04:12.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_es_fr | 2 | null | transformers | 23,406 |
---
language: Spanish French
tags:
- translation Spanish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Fecha del anuncio en el Pleno"
---
# legal_t5_small_multitask_es_fr model
Model for translating legal text from Spanish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_es_fr model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Spanish to French.
### How to use
Here is how to use this model to translate legal text from Spanish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Fecha del anuncio en el Pleno"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_fr | 41.523|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_it | 994b626179e6c8b4092f7bfd909fe078e4ee4e1b | 2021-06-23T11:11:18.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_fr_it | 2 | null | transformers | 23,407 |
---
language: French Italian
tags:
- translation French Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Situation humanitaire au Soudan"
---
# legal_t5_small_multitask_fr_it model
Model for translating legal text from French to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_fr_it model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from French to Italian.
### How to use
Here is how to use this model to translate legal text from French to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Situation humanitaire au Soudan"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_it model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_it | 41.140|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_cs | 6db09d86255a2bda0c97ef5770c14684f9842561 | 2021-06-23T11:12:39.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_it_cs | 2 | null | transformers | 23,408 |
---
language: Italian Cszech
tags:
- translation Italian Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Per mobilitare il Fondo, la Commissione ha presentato all'autorità di bilancio una richiesta di storno per un importo complessivo di 667.823 EUR dalla riserva FEG (40 02 43) in stanziamenti d'impegno verso la linea di bilancio FEG."
---
# legal_t5_small_multitask_it_cs model
Model for translating legal text from Italian to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_cs model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Italian to Czech.
### How to use
Here is how to use this model to translate legal text from Italian to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Per mobilitare il Fondo, la Commissione ha presentato all'autorità di bilancio una richiesta di storno per un importo complessivo di 667.823 EUR dalla riserva FEG (40 02 43) in stanziamenti d'impegno verso la linea di bilancio FEG."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_cs | 37.935|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_de | 5c07e2b05ed0d583bbf2f22d9f16f0689e67e1de | 2021-06-23T11:13:21.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_it_de | 2 | null | transformers | 23,409 |
---
language: Italian Deustch
tags:
- translation Italian Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "di Alyn Smith (Verts/ALE)"
---
# legal_t5_small_multitask_it_de model
Model for translating legal text from Italian to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_de model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Italian to German.
### How to use
Here is how to use this model to translate legal text from Italian to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "di Alyn Smith (Verts/ALE)"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_de model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_de | 35.365|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_en | 6dc9e8687b1c907a9da4d73eb88bbc28fd70ce8a | 2021-06-23T11:13:57.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_it_en | 2 | null | transformers | 23,410 |
---
language: Italian English
tags:
- translation Italian English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Con l’adesione all'area dell'euro questo procedimento non è stato più possibile."
---
# legal_t5_small_multitask_it_en model
Model for translating legal text from Italian to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_en model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Italian to English.
### How to use
Here is how to use this model to translate legal text from Italian to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Con l’adesione all'area dell'euro questo procedimento non è stato più possibile."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_en model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_en | 36.687|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_fr | 9eb0bb78a50c3681ccb28c8b20cf2fddf4657c42 | 2021-06-23T11:15:25.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_it_fr | 2 | null | transformers | 23,411 |
---
language: Italian French
tags:
- translation Italian French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Gli Stati membri adottano le leggi, i regolamenti e le disposizioni amministrative necessari per ottemperare alla presente direttiva entro il 31 dicembre 2002 e ne informano immediatamente la Commissione."
---
# legal_t5_small_multitask_it_fr model
Model for translating legal text from Italian to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_fr model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Italian to French.
### How to use
Here is how to use this model to translate legal text from Italian to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Gli Stati membri adottano le leggi, i regolamenti e le disposizioni amministrative necessari per ottemperare alla presente direttiva entro il 31 dicembre 2002 e ne informano immediatamente la Commissione."
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_fr model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_fr | 41.956|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_cs | 529f8011970b55d4274b2ed2a0a8ea9c3779870b | 2021-06-23T11:16:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_sv_cs | 2 | null | transformers | 23,412 |
---
language: Swedish Cszech
tags:
- translation Swedish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Standarderna för integrerat växtskydd bör tillämpas snabbare än vad kommissionen föreskrivit."
---
# legal_t5_small_multitask_sv_cs model
Model for translating legal text from Swedish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_cs model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Swedish to Czech.
### How to use
Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Standarderna för integrerat växtskydd bör tillämpas snabbare än vad kommissionen föreskrivit."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_cs | 45.058|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_de | 95b5fbb7eff911f4a9884fe68d4b55ebecf1004e | 2021-06-23T11:17:27.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_sv_de | 2 | null | transformers | 23,413 |
---
language: Swedish Deustch
tags:
- translation Swedish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kan kommissionen bekräfta att i Olaf‑handlingar som samlats in inom ramen för denna granskning, daterade mellan 2000 och 2004, kan följande information hittas: —"
---
# legal_t5_small_multitask_sv_de model
Model for translating legal text from Swedish to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_de model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Swedish to German.
### How to use
Here is how to use this model to translate legal text from Swedish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Kan kommissionen bekräfta att i Olaf‑handlingar som samlats in inom ramen för denna granskning, daterade mellan 2000 och 2004, kan följande information hittas: —"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_de model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_de | 44.684|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_it | 42e1f7077e2e598cd8cb54c1c05e43ee4b517304 | 2021-06-23T11:20:05.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_sv_it | 2 | null | transformers | 23,414 |
---
language: Swedish Italian
tags:
- translation Swedish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "De nationella tillsynsmyndigheterna får använda"
---
# legal_t5_small_multitask_sv_it model
Model for translating legal text from Swedish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on the three parallel corpora (JRC-Acquis, Europarl and DCEP), covering 42 language pairs,
together with an unsupervised task in which the model performs masked language modelling.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_it model; instead, the unsupervised task is added to all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Swedish to Italian.
### How to use
Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "De nationella tillsynsmyndigheterna får använda"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_it model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_it | 44.242|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_multitask_cs | 8f053420be42ac286e3e7dc2c8d68099784d7681 | 2021-06-23T11:24:17.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_multitask_cs | 2 | null | transformers | 23,415 | Entry not found |
SEBIS/legal_t5_small_summ_multitask_de | ce281dd180f69a68098b49fe0a6634ab754891a2 | 2021-06-23T11:24:51.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_multitask_de | 2 | null | transformers | 23,416 | Entry not found |
SEBIS/legal_t5_small_summ_multitask_sv | af601441b29792740ff2579650d49b3e4290c8b7 | 2021-06-23T11:28:09.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_multitask_sv | 2 | null | transformers | 23,417 | Entry not found |
SEBIS/legal_t5_small_trans_cs_en_small_finetuned | a53f1d4afa2a8f3fdf50ce6e3d0da73d08a70a68 | 2021-06-23T11:31:44.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_en_small_finetuned | 2 | null | transformers | 23,418 |
---
language: Cszech English
tags:
- translation Cszech English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "4) Seznam užívaných výrobků s obsahem PFOS: Kvůli značnému poklesu výroby PFOS po roce 2000 představují největší zdroj emisí patrně dřívější využití, která však nadále reálně existují."
---
# legal_t5_small_trans_cs_en_small_finetuned model
Model for translating legal text from Czech to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_cs_en_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_cs_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
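The dimensions quoted above correspond to a `t5-small`-sized configuration. A sketch of the equivalent transformers config is shown below; the vocabulary size is an assumption, since the card does not state it.

```python
from transformers import T5Config, T5ForConditionalGeneration

# t5-small-sized configuration matching the dimensions quoted above.
config = T5Config(
    vocab_size=32128,  # assumption: default T5 vocabulary size
    d_model=512,       # dmodel
    d_ff=2048,         # dff
    num_heads=8,       # 8-headed attention
    num_layers=6,      # 6 layers each in encoder and decoder
)
model = T5ForConditionalGeneration(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # roughly 60M
```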
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to English.
### How to use
Here is how to use this model to translate legal text from Czech to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "4) Seznam užívaných výrobků s obsahem PFOS: Kvůli značnému poklesu výroby PFOS po roce 2000 představují největší zdroj emisí patrně dřívější využití, která však nadále reálně existují."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
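For illustration, a simplified sketch of this kind of random span masking with T5-style sentinel tokens is shown below. The masking rate and span length are assumptions, and the real preprocessing operates on subword ids rather than whitespace tokens.

```python
import random

def mask_random_spans(tokens, mask_prob=0.15, mean_span=3):
    """Replace random spans with sentinel tokens (<extra_id_N>) and collect the targets."""
    corrupted, targets, i, sentinel = [], [], 0, 0
    while i < len(tokens):
        if random.random() < mask_prob / mean_span:
            corrupted.append(f"<extra_id_{sentinel}>")
            targets.append((f"<extra_id_{sentinel}>", tokens[i:i + mean_span]))
            sentinel += 1
            i += mean_span
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted, targets

corrupted, targets = mask_random_spans("the quick brown fox jumps over the lazy dog".split())
print(corrupted)
print(targets)
```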
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_en_small_finetuned | 56.936|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_en_small_finetuned | aceaed594443ec760ae717f2192167b78bc324e7 | 2021-06-23T09:28:23.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_en_small_finetuned | 2 | null | transformers | 23,419 |
---
language: Deustch English
tags:
- translation Deustch English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "In welchen anderen EU-Ländern ist von ähnlichen Listen mit Parteikadern und Regierungsmitgliedern berichtet worden, die die „Schirmherrschaft“ über Vorschläge für private von der Europäischen Union kofinanzierte Investitionen übernommen haben?"
---
# legal_t5_small_trans_de_en_small_finetuned model
Model for translating legal text from German to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_de_en_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from German to English.
### How to use
Here is how to use this model to translate legal text from German to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "In welchen anderen EU-Ländern ist von ähnlichen Listen mit Parteikadern und Regierungsmitgliedern berichtet worden, die die „Schirmherrschaft“ über Vorschläge für private von der Europäischen Union kofinanzierte Investitionen übernommen haben?"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_en_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
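For orientation, an Adafactor setup with an inverse square root schedule can be written with the optimization utilities shipped in `transformers`, as sketched below; this is not the published training script, and `t5-small` stands in for the actual initialization.

```python
# Hedged sketch of an Adafactor optimizer with an inverse square root learning
# rate schedule, using the utilities shipped with the transformers library.
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelWithLMHead.from_pretrained("t5-small")

# With relative_step=True and lr=None, Adafactor applies its built-in
# inverse square root step-size rule; warmup_init ramps the step size up from zero.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    relative_step=True,
    warmup_init=True,
    scale_parameter=True,
)
lr_scheduler = AdafactorSchedule(optimizer)  # proxy schedule that reports the current step size
```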
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_en_small_finetuned | 48.674|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_cs | 09a39fb266d30a98c4e47821b0a83cd1b69b141f | 2021-06-23T09:34:00.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_cs | 2 | null | transformers | 23,420 |
---
language: English Cszech
tags:
- translation English Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "1 In the countries concerned, this certainly affects the priority assigned to making progress on the issue of final disposal, particularly of highly radioactive waste and irradiated fuel elements."
---
# legal_t5_small_trans_en_cs model
Model for translating legal text from English to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Czech.
### How to use
Here is how to use this model to translate legal text from English to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "1 In the countries concerned, this certainly affects the priority assigned to making progress on the issue of final disposal, particularly of highly radioactive waste and irradiated fuel elements."
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_cs model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_cs | 50.177|
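A corpus-level BLEU score such as the one above can be computed with sacrebleu; the sketch below uses placeholder sentences rather than the actual test split.

```python
# Sketch of a corpus-level BLEU computation with sacrebleu.
# The hypothesis and reference sentences below are placeholders.
import sacrebleu

hypotheses = ["Členové přítomní při závěrečném hlasování"]   # model outputs, one per test sentence
references = [["Členové přítomní při konečném hlasování"]]   # one inner list per reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 3))
```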
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_cs_small_finetuned | b98a44605b47c0a75f7cad198c3d525f8d0a2420 | 2021-06-23T09:34:37.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_cs_small_finetuned | 2 | null | transformers | 23,421 |
---
language: English Cszech
tags:
- translation English Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Members present for the final vote"
---
# legal_t5_small_trans_en_cs_small_finetuned model
Model for translating legal text from English to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Czech.
### How to use
Here is how to use this model to translate legal text from English to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Members present for the final vote"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_cs_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
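The span-masking objective described above can be illustrated with the standard T5 sentinel-token setup; the snippet below uses the stock `t5-small` checkpoint and made-up spans, since the exact masking configuration used for this model is not documented here.

```python
# Illustration of T5-style masked span prediction ("masked language modelling"):
# masked spans are replaced by sentinel tokens in the input and reconstructed in
# the target. Uses the stock t5-small checkpoint purely for illustration.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

masked_input = "Members <extra_id_0> for the final <extra_id_1>"
target = "<extra_id_0> present <extra_id_1> vote <extra_id_2>"

input_ids = tokenizer(masked_input, return_tensors="pt").input_ids
labels = tokenizer(target, return_tensors="pt").input_ids

loss = model(input_ids=input_ids, labels=labels).loss  # denoising loss over the masked spans
print(float(loss))
```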
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_cs_small_finetuned | 50.394|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_en_fr | cfffe175a9371af329bb2cd7aabd6827e5776d6f | 2021-06-23T09:37:20.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_fr | 2 | null | transformers | 23,422 | Entry not found |
SEBIS/legal_t5_small_trans_en_fr_small_finetuned | bcc7b4e561b8592418d094058211e833d144c8c4 | 2021-06-23T09:37:57.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_fr_small_finetuned | 2 | null | transformers | 23,423 |
---
language: English French
tags:
- translation English French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "recalling the decision by 14 Member States earlier this year to limit their bilateral contacts with another Member State,"
---
# legal_t5_small_trans_en_fr_small_finetuned model
Model for translating legal text from English to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_en_fr_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to French.
### How to use
Here is how to use this model to translate legal text from English to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "recalling the decision by 14 Member States earlier this year to limit their bilateral contacts with another Member State,"
pipeline([en_text], max_length=512)
```
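If more control over decoding is needed than the pipeline offers (for example beam size, maximum length, or running on CPU by setting `device=-1` or dropping the `device` argument), the same model can be called directly through the tokenizer and `generate`; the generation parameters below are illustrative defaults, not tuned values.

```python
# Direct use of the tokenizer and model without the pipeline wrapper.
# Generation parameters are illustrative defaults, not tuned settings.
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_en_fr")
model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_fr_small_finetuned")

en_text = "recalling the decision by 14 Member States earlier this year to limit their bilateral contacts with another Member State,"

inputs = tokenizer(en_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```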
## Training data
The legal_t5_small_trans_en_fr_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_fr_small_finetuned | 52.476|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_cs | b9903638c0ec73f4b06938eb449b156b967a2ee2 | 2021-06-23T09:42:01.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_cs | 2 | null | transformers | 23,424 | Entry not found |
SEBIS/legal_t5_small_trans_es_de | be951c85083da3f7cd13d22977b3a802dc9b216e | 2021-06-23T09:43:21.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_de | 2 | null | transformers | 23,425 | Entry not found |
SEBIS/legal_t5_small_trans_es_en | 3fec25c808493406e8e992d24c42f61c445df3c7 | 2021-06-23T09:44:38.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_en | 2 | null | transformers | 23,426 | Entry not found |
SEBIS/legal_t5_small_trans_fr_it_small_finetuned | aff678f4c2efea41cb50744d2319f25ced78b0dc | 2021-06-23T09:56:35.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_it_small_finetuned | 2 | null | transformers | 23,427 |
---
language: French Italian
tags:
- translation French Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Le vote a lieu dans un délai de deux mois après réception de la proposition, à moins qu'à la demande de la commission compétente, d'un groupe politique ou de quarante députés au moins, le Parlement n'en décide autrement."
---
# legal_t5_small_trans_fr_it_small_finetuned model
Model for translating legal text from French to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_fr_it_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Italian.
### How to use
Here is how to use this model to translate legal text from French to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Le vote a lieu dans un délai de deux mois après réception de la proposition, à moins qu'à la demande de la commission compétente, d'un groupe politique ou de quarante députés au moins, le Parlement n'en décide autrement."
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_it_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_it_small_finetuned | 46.309|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_sv | 4e5959382ce531f7675e1b36cfa476d1cad77a9b | 2021-06-23T09:57:09.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_sv | 2 | null | transformers | 23,428 |
---
language: French Swedish
tags:
- translation French Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "posée conformément à l'article 43 du règlement"
---
# legal_t5_small_trans_fr_sv model
Model for translating legal text from French to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_fr_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
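For reference, the architecture described above corresponds roughly to the stock t5-small configuration; the sketch below builds such a model from a `T5Config` (the vocabulary size is an assumption taken from the stock t5-small checkpoint) and counts its parameters.

```python
# Sketch of the t5-small-sized configuration described above; the vocabulary
# size is an assumption, not a documented value for this model.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    d_model=512,           # dmodel
    d_ff=2048,             # dff
    num_heads=8,           # 8-headed attention
    num_layers=6,          # encoder layers
    num_decoder_layers=6,  # decoder layers
    vocab_size=32128,      # assumed, as in t5-small
)
model = T5ForConditionalGeneration(config)
print(sum(p.numel() for p in model.parameters()))  # on the order of 60 million parameters
```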
## Intended uses & limitations
The model could be used for translation of legal texts from French to Swedish.
### How to use
Here is how to use this model to translate legal text from French to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "posée conformément à l'article 43 du règlement"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_sv | 41.9|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_fr_sv_small_finetuned | 0298f085af6b5c2fd33889bb72b8deccc236d10b | 2021-06-23T09:57:41.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_sv_small_finetuned | 2 | null | transformers | 23,429 |
---
language: French Swedish
tags:
- translation French Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Budget 2009: Section III - Commission"
---
# legal_t5_small_trans_fr_sv_small_finetuned model
Model for translating legal text from French to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_fr_sv_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Swedish.
### How to use
Here is how to use this model to translate legal text from French to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "Budget 2009: Section III - Commission"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_sv_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_sv_small_finetuned | 41.768|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_es | ef9380be56866ef0e72dd35852ac62812b07f3ef | 2021-06-23T10:01:53.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_es | 2 | null | transformers | 23,430 |
---
language: Italian Spanish
tags:
- translation Italian Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"
---
# legal_t5_small_trans_it_es model
Model for translating legal text from Italian to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_it_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Spanish.
### How to use
Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_es | 48.998|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_es_small_finetuned | 1c0d3d21be4409163e5b94bc81fea19cc965cfd4 | 2021-06-23T10:02:27.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_es_small_finetuned | 2 | null | transformers | 23,431 |
---
language: Italian Spanish
tags:
- translation Italian Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si è dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si è detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonché ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ciò al fine di agevolare i colloqui di pace,"
---
# legal_t5_small_trans_it_es_small_finetuned model
Model for translating legal text from Italian to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_it_es_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Spanish.
### How to use
Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si è dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si è detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonché ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ciò al fine di agevolare i colloqui di pace,"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_es_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_es_small_finetuned | 49.083|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_sv_small_finetuned | b51e123dda74d433ba5aed597840a730baef551a | 2021-06-23T10:04:50.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_sv_small_finetuned | 2 | null | transformers | 23,432 |
---
language: Italian Swedish
tags:
- translation Italian Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Cooperazione rafforzata Annuncio in Aula"
---
# legal_t5_small_trans_it_sv_small_finetuned model
Model for translating legal text from Italian to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_it_sv_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Swedish.
### How to use
Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Cooperazione rafforzata Annuncio in Aula"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_sv_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv_small_finetuned | 41.243|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_cs_small_finetuned | a30f519aaf5515934c2a14428e0e25d3938b41d0 | 2021-06-23T10:06:06.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_cs_small_finetuned | 2 | null | transformers | 23,433 |
---
language: Swedish Cszech
tags:
- translation Swedish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Kommissionens personal och extern personal som bemyndigas av kommissionen måste få tillträde till bidragsmottagarens lokaler och tillgång till all information som behövs för att genomföra sådana revisioner, inbegripet information i elektronisk form."
---
# legal_t5_small_trans_sv_cs_small_finetuned model
Model for translating legal text from Swedish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_sv_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Czech.
### How to use
Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Kommissionens personal och extern personal som bemyndigas av kommissionen måste få tillträde till bidragsmottagarens lokaler och tillgång till all information som behövs för att genomföra sådana revisioner, inbegripet information i elektronisk form."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_cs_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs_small_finetuned | 45.472|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_en_small_finetuned | 9450de8e0d6bc36d4451106d00996779281ce624 | 2021-06-23T10:08:47.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_en_small_finetuned | 2 | null | transformers | 23,434 |
---
language: Swedish English
tags:
- translation Swedish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Alejo Vidal-Quadras : 262 röster"
---
# legal_t5_small_trans_sv_en_small_finetuned model
Model for translating legal text from Swedish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_sv_en_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to English.
### How to use
Here is how to use this model to translate legal text from Swedish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Alejo Vidal-Quadras : 262 röster"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_en_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en_small_finetuned | 52.084|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_sv_es_small_finetuned | 35bbeaf617300897b08985142b7e698c2af71992 | 2021-06-23T10:09:55.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_es_small_finetuned | 2 | null | transformers | 23,435 |
---
language: Swedish Spanish
tags:
- translation Swedish Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "– med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"
---
# legal_t5_small_trans_sv_es_small_finetuned model
Model for translating legal text from Swedish to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on the three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_sv_es_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Spanish.
### How to use
Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "– med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_es_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as on the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_es_small_finetuned | 47.411|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SJSui/NekuBot | e87de0affeb71312ed688cf2272bb1f3cb31e1a8 | 2021-11-17T03:16:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | SJSui | null | SJSui/NekuBot | 2 | null | transformers | 23,436 | Entry not found |
SPGT/LiveSafe-DialoGPT | ea1aacc2ec9413a511cb4899262963fdedf6498d | 2021-10-19T03:47:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | SPGT | null | SPGT/LiveSafe-DialoGPT | 2 | null | transformers | 23,437 | ---
tags:
- conversational
license: mit
---
## LiveSafe chatbot response generation model based on DialoGPT
|
Salma-2/DialoGPT-small-harrypotter | cb9f8be77a5d3e4ffcac56796507418e1ad399e2 | 2021-09-25T20:46:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Salma-2 | null | Salma-2/DialoGPT-small-harrypotter | 2 | null | transformers | 23,438 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Santiagot1105/wav2vec2-lar-xlsr-finetune-es-col | 916efc20e0fa8b9ea0c692da2216b388256ba5db | 2022-02-22T06:32:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Santiagot1105 | null | Santiagot1105/wav2vec2-lar-xlsr-finetune-es-col | 2 | null | transformers | 23,439 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-lar-xlsr-finetune-es-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lar-xlsr-finetune-es-col
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1669
- Wer: 0.2595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
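The hyperparameters listed above map roughly onto a `transformers` `TrainingArguments` object as sketched below; the output directory is an assumption, and this is not the exact training script used for this checkpoint.

```python
# Rough mapping of the hyperparameters above onto a transformers TrainingArguments
# object; the output directory is assumed and this is not the exact script used.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-lar-xlsr-finetune-es-col",  # assumed output path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    warmup_steps=500,
    num_train_epochs=30,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
)
```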
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1108 | 8.51 | 400 | 0.5936 | 0.6085 |
| 0.3015 | 17.02 | 800 | 0.2071 | 0.2941 |
| 0.0989 | 25.53 | 1200 | 0.1669 | 0.2595 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Santiagot1105/wav2vec2-large-xlsr-finetune-spanish-col | 8af0447cf9e5afda5af898d651501c3f3f7b313e | 2022-01-29T17:54:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Santiagot1105 | null | Santiagot1105/wav2vec2-large-xlsr-finetune-spanish-col | 2 | null | transformers | 23,440 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-finetune-spanish-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-finetune-spanish-col
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7105
- Wer: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.2829 | 3.25 | 400 | 2.9632 | 1.0 |
| 2.9664 | 6.5 | 800 | 2.8494 | 1.0542 |
| 2.8353 | 9.76 | 1200 | 2.8352 | 1.0101 |
| 2.7863 | 13.01 | 1600 | 2.7421 | 0.9837 |
| 2.762 | 16.26 | 2000 | 2.7254 | 0.9861 |
| 2.7483 | 19.51 | 2400 | 2.7228 | 0.9874 |
| 2.7482 | 22.76 | 2800 | 2.7228 | 0.9999 |
| 2.7373 | 26.02 | 3200 | 2.7163 | 0.9824 |
| 2.7328 | 29.27 | 3600 | 2.7105 | 0.9824 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
SauravMaheshkar/bert-multi-cased-finedtuned-xquad-chaii | abff101fa6e207749ac74451d416ac41585cd56a | 2021-10-13T17:56:01.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/bert-multi-cased-finedtuned-xquad-chaii | 2 | null | transformers | 23,441 | Entry not found |
SauravMaheshkar/bert-multi-cased-finetuned-chaii | c34ccf34ef2c970f8d27699ab98166f4ea3b669b | 2021-10-13T13:32:23.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/bert-multi-cased-finetuned-chaii | 2 | null | transformers | 23,442 | Entry not found |
SauravMaheshkar/bert-multi-uncased-finetuned-chaii | 01da506e18cd6468d5b2629cc96db00348a0891d | 2021-10-13T14:10:14.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/bert-multi-uncased-finetuned-chaii | 2 | null | transformers | 23,443 | Entry not found |
SauravMaheshkar/clr-finetuned-albert-base | 058d7e87ef37ce5de61720b7346e2ca4168d5c2e | 2021-09-23T15:57:32.000Z | [
"pytorch",
"albert",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
] | fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-finetuned-albert-base | 2 | null | transformers | 23,444 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
---

# FineTuning
| **Architecture** | **Weights** | **Training Loss** | **Validation Loss** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 |
| xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 |
| bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 |
| albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 |
| roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
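Any of the checkpoints listed above can be loaded straight from the Hub. A minimal sketch for the ALBERT-base row follows, assuming the fine-tuned weights still expose the masked-language-modelling head the repository is tagged with; the example sentence is illustrative only.

```python
from transformers import pipeline

# Fill-mask inference with the fine-tuned ALBERT-base checkpoint from the table above.
fill_mask = pipeline("fill-mask", model="SauravMaheshkar/clr-finetuned-albert-base")

# ALBERT uses "[MASK]" as its mask token.
for prediction in fill_mask("The passage was easy to [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```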
|
SauravMaheshkar/clr-finetuned-albert-large | 0ff3ede7b70c123248dc8e69286cb58856c86fb4 | 2021-09-23T15:57:34.000Z | [
"pytorch",
"albert",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
] | fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-finetuned-albert-large | 2 | null | transformers | 23,445 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
---

# FineTuning
| **Architecture** | **Weights** | **Training Loss** | **Validation Loss** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 |
| xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 |
| bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 |
| albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 |
| roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
|
SauravMaheshkar/clr-finetuned-roberta-large | c6572a310087c23a7f44a74dd78887a1e0966a27 | 2021-09-23T15:57:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
] | fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-finetuned-roberta-large | 2 | null | transformers | 23,446 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
---

# FineTuning
| **Architecture** | **Weights** | **Training Loss** | **Validation Loss** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 |
| xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 |
| bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 |
| albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 |
| roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
|
SauravMaheshkar/clr-pretrained-electra-large | fbed12c478a9a6904bc8e56302447f3402c2c28d | 2021-09-23T15:58:01.000Z | [
"pytorch",
"electra",
"pretraining",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0"
] | null | false | SauravMaheshkar | null | SauravMaheshkar/clr-pretrained-electra-large | 2 | null | transformers | 23,447 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
metrics:
- Perplexity
---

# PreTraining
| **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 |
| electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 |
| electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 |
| electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 |
| distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
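For reference, the perplexity column is conventionally the exponential of a mean cross-entropy loss measured on held-out text. A minimal sketch of that conversion is shown below; the loss value is a placeholder and is unrelated to the figures in the table.

```python
import math

# Perplexity = exp(mean cross-entropy loss) on held-out text.
eval_loss = 1.855  # placeholder evaluation loss
print(f"eval loss = {eval_loss:.3f} -> perplexity = {math.exp(eval_loss):.3f}")
```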
|
SauravMaheshkar/distilbert-base-cased-distilled-chaii | 78fb672bc73223b1d77ae34c1a81c1611d74d2e0 | 2021-10-14T13:25:17.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/distilbert-base-cased-distilled-chaii | 2 | null | transformers | 23,448 | Entry not found |
SauravMaheshkar/distilbert-multi-finetuned-for-xqua-on-chaii | 5a9ee50cc389cae86a30143346bd2418620d0b6c | 2021-10-13T17:33:44.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/distilbert-multi-finetuned-for-xqua-on-chaii | 2 | null | transformers | 23,449 | Entry not found |
Saz/DialoGPT-small-paimon | 99464bb4afcb45ecfbb005c3d285277fd223e1e5 | 2021-10-07T08:22:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Saz | null | Saz/DialoGPT-small-paimon | 2 | null | transformers | 23,450 | ---
tags:
- conversational
---
# Paimon DialoGPT Model
|
ScottaStrong/DialogGPT-medium-Scott | bf5e95eec284fcb9b283ae459d107903ef308acc | 2021-06-17T03:59:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | ScottaStrong | null | ScottaStrong/DialogGPT-medium-Scott | 2 | null | transformers | 23,451 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-medium-Scott")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-medium-Scott")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
ScottaStrong/DialogGPT-small-Scott | dedc1586d9d60ac55ccf19209f36c73124a37f97 | 2021-06-17T04:11:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | ScottaStrong | null | ScottaStrong/DialogGPT-small-Scott | 2 | null | transformers | 23,452 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-Scott")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-Scott")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
Shauli/RE-metric-model-siamese-spike | 13af87fa279aef6051eb67b3a556f85c56a0880f | 2021-05-18T22:34:51.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Shauli | null | Shauli/RE-metric-model-siamese-spike | 2 | null | transformers | 23,453 | Entry not found |
ShengdingHu/adapter_t5-base_superglue-multirc | a85611bb43a684b2b4b787365a0aa9d94274efa0 | 2022-01-31T17:41:12.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_superglue-multirc | 2 | null | transformers | 23,454 | Entry not found |
ShengdingHu/bitfit_t5-base_stsb | c7ffc7180d88054532ec54801b11954f088a0016 | 2022-01-31T12:58:34.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_stsb | 2 | null | transformers | 23,455 | Entry not found |
ShengdingHu/lora_roberta-base_rte | bc3b3a0cd979390192aa8379526026295fe8bafd | 2022-01-15T06:49:00.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_roberta-base_rte | 2 | null | transformers | 23,456 | Entry not found |
ShengdingHu/lora_t5-base_stsb | 8e5855ebd493f6622a2692add6e89303dd908110 | 2022-02-02T08:18:07.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_stsb | 2 | null | transformers | 23,457 | Entry not found |
ShengdingHu/superglue-multirc | e379497f2ccf7252e50c768cfb848f897bcbd60d | 2022-02-02T07:53:01.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-multirc | 2 | null | transformers | 23,458 | Entry not found |
ShengdingHu/test_delta_model | ab13f5144bb9841efa1c7dc7fb1f32be1ba5d289 | 2022-02-07T04:55:33.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/test_delta_model | 2 | null | transformers | 23,459 | Entry not found |
Shinx/DialoGPT-medium-myheroacademia | 9cf774fada8d76d8c073775f2a0d6b222e04795e | 2022-01-05T18:46:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Shinx | null | Shinx/DialoGPT-medium-myheroacademia | 2 | null | transformers | 23,460 | ---
tags:
- conversational
---
# My Hero Academia DialoGPT Model |
Shushant/NepNewsBERT | 698130bf4145290c84d15f0048f57c876ba56cf2 | 2021-12-14T06:44:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Shushant | null | Shushant/NepNewsBERT | 2 | null | transformers | 23,461 | # NepNewsBERT
## Masked language model for the Nepali language, trained on Nepali news scraped from different Nepali news websites. The dataset contained about 10 million Nepali sentences, mainly related to news.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Shushant/NepNewsBERT")
model = AutoModelForMaskedLM.from_pretrained("Shushant/NepNewsBERT")

from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer,
)

from pprint import pprint
pprint(fill_mask(f"तिमीलाई कस्तो {tokenizer.mask_token}."))
``` |
SilentMyuth/stable-jenny | 7d562336d2e9a5d60e6494f2dbb0456d59061f36 | 2021-08-27T19:31:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | SilentMyuth | null | SilentMyuth/stable-jenny | 2 | null | transformers | 23,462 | Entry not found |
SirBastianXVII/DialoGPT-small-TVD | 7496da588a8fe2cc0e49f0ff4daf857699c0eb60 | 2021-09-27T14:51:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SirBastianXVII | null | SirBastianXVII/DialoGPT-small-TVD | 2 | null | transformers | 23,463 | ---
tags:
- conversational
---
# The Vampire Diaries DialoGPT Model |
SoLID/sgd-t5-tod | 6ff45e2e401a694c2b8212c9c20fbef194e492fb | 2022-03-01T02:58:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"eng",
"dataset:schema guided dialogue",
"transformers",
"dialogue",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | SoLID | null | SoLID/sgd-t5-tod | 2 | null | transformers | 23,464 | ---
language:
- eng
thumbnail: "https://townsquare.media/site/88/files/2020/06/C_Charlotte_RGB_7484.jpg"
tags:
- dialogue
license: afl-3.0
datasets:
- schema guided dialogue
metrics:
- exactness
---
Hyperparameters: 1 epoch, max_len_dict including domain classification task, and 1e-5 learning rate |
SonMooSans/test | 47092df4b33e51659448094902bad9c25d4e3661 | 2021-12-19T14:29:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SonMooSans | null | SonMooSans/test | 2 | null | transformers | 23,465 | ---
tags:
- conversational
---
# My Awesome Model |
SophieTr/fine-tune-Pegasus | f9929bff7084d06b136a494cbb2d6ed77bbf801e | 2021-12-30T11:42:28.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SophieTr | null | SophieTr/fine-tune-Pegasus | 2 | null | transformers | 23,466 | Entry not found |
Sourabh714/distilbert-base-uncased-finetuned-squad | 2a1c9dd3bcd39cfdd2d505c5cc22eb959036d8e9 | 2022-02-15T20:47:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Sourabh714 | null | Sourabh714/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,467 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1573
## Model description
More information needed
## Intended uses & limitations
More information needed
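A minimal usage sketch with the question-answering pipeline follows; the question and context are made up for illustration.

```python
from transformers import pipeline

# Extractive QA: the model predicts a start/end answer span inside the supplied context.
qa = pipeline(
    "question-answering",
    model="Sourabh714/distilbert-base-uncased-finetuned-squad",
)

context = (
    "SQuAD is a reading-comprehension dataset consisting of questions posed by "
    "crowdworkers on a set of Wikipedia articles."
)
result = qa(question="What kind of dataset is SQuAD?", context=context)
print(result["answer"], round(result["score"], 4))
```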
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2188 | 1.0 | 5533 | 1.1708 |
| 0.9519 | 2.0 | 11066 | 1.1058 |
| 0.7576 | 3.0 | 16599 | 1.1573 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
StephennFernandes/XLS-R-marathi | 83c6664e6bde2ef0d30af1c04581d5204621747b | 2022-03-24T11:55:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"transformers",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | StephennFernandes | null | StephennFernandes/XLS-R-marathi | 2 | null | transformers | 23,468 | ---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- generated_from_trainer
- hf-asr-leaderboard
model-index:
- name: XLS-R-marathi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
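Since no inference example is provided, here is a minimal sketch assuming the repository ships the usual Wav2Vec2 processor and CTC head; the audio file name is a placeholder and is resampled to the 16 kHz rate XLS-R expects.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "StephennFernandes/XLS-R-marathi"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a local recording and resample it to 16 kHz.
speech, orig_rate = torchaudio.load("marathi_sample.wav")  # hypothetical file
speech = torchaudio.functional.resample(speech, orig_rate, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy CTC decoding, without an external language model.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```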
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
StephennFernandes/wav2vec2-XLS-R-300m-konkani | c786d8e3f9588481d44fdf067f16da59cb609f56 | 2022-02-08T21:33:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | StephennFernandes | null | StephennFernandes/wav2vec2-XLS-R-300m-konkani | 2 | null | transformers | 23,469 |
---
tags:
- automatic-speech-recognition
- robust-speech-event
---
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on a private dataset.
It achieves the following results on the evaluation set:
The following hyper-parameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 30
- mixed_precision_training: Native AMP
|
SteveC/sdc_bot_small | 85d156a7ca269da5686b8f1471c78e015fa3a387 | 2022-02-10T01:58:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | SteveC | null | SteveC/sdc_bot_small | 2 | 1 | transformers | 23,470 | Entry not found |
SteveC/sdc_bot_two_step | 32d3ac3032a2a1d2058b0126d895735e6ced960e | 2022-02-22T03:22:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | SteveC | null | SteveC/sdc_bot_two_step | 2 | null | transformers | 23,471 | Entry not found |
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1 | 6683028de34abb999ad1bcae38445513acf60ff6 | 2022-02-04T11:14:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Subhashini17 | null | Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1 | 2 | null | transformers | 23,472 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab-new1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
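The card reports a word error rate; a short sketch of how that metric is computed, using the same `wer` metric the Common Voice fine-tuning scripts rely on, is shown below (the reference and predicted transcripts are placeholders).

```python
from datasets import load_metric

# WER = word-level edit distance between prediction and reference,
# normalised by the number of reference words.
wer = load_metric("wer")

references = ["இது ஒரு சோதனை வாக்கியம்"]   # ground-truth transcript (placeholder)
predictions = ["இது ஒரு சோதனை"]            # model output (placeholder)

print("WER: {:.2f}".format(100 * wer.compute(predictions=predictions, references=references)))
```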
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab | c7f0b5b805e9d3ff71796fbe4906114880207e6b | 2022-02-17T04:36:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Subhashini17 | null | Subhashini17/wav2vec2-large-xls-r-300m-ta-colab | 2 | null | transformers | 23,473 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab
This model is a fine-tuned version of [akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final](https://huggingface.co/akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Sunbird/sunbird-en-lg | ea6097f1f16104d6f6b792c8accd57b5b17c28fd | 2021-10-04T13:58:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Sunbird | null | Sunbird/sunbird-en-lg | 2 | 1 | transformers | 23,474 | English to Luganda text translation |
SuperAI2-Machima/mt5-small-thai-yes-no-qg | 5ce07c8ebfadf2ad96787bfd4c3ff2d219d62c98 | 2022-02-23T12:28:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"thai",
"th",
"dataset:NSC2018",
"dataset:wiki-documents-nsc",
"dataset:ThaiQACorpus-DevelopmentDataset",
"transformers",
"Yes No question-generation",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | SuperAI2-Machima | null | SuperAI2-Machima/mt5-small-thai-yes-no-qg | 2 | null | transformers | 23,475 | ---
tags:
- Yes No question-generation
language:
- thai
- th
datasets:
- NSC2018
- wiki-documents-nsc
- ThaiQACorpus-DevelopmentDataset
widget:
- text: "วันที่ 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น"
example_title: "Example 01"
- text: "พลเอก ประยุทธ์ จันทร์โอชา (เกิด 21 มีนาคม พ.ศ. 2497) ชื่อเล่น ตู่ เป็นนักการเมืองและอดีตนายทหารบกชาวไทย"
example_title: "Example 02"
license: mit
---
[SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/)
[Google's mT5](https://github.com/google-research/multilingual-t5) , [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg')
source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'
print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=50,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
#Predicted Summary Text :
#answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
``` |
SupriyaArun/distilbert-base-uncased-finetuned-squad | 566d7be9df7fba694d7222a4b2c83bc77aa368e9 | 2021-12-10T19:20:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | SupriyaArun | null | SupriyaArun/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,476 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1569
## Model description
More information needed
## Intended uses & limitations
More information needed
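A sketch of running extractive QA without the pipeline wrapper, decoding the answer span directly from the start/end logits; the question and context below are illustrative.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "SupriyaArun/distilbert-base-uncased-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What task was the model fine-tuned for?"
context = "This DistilBERT checkpoint was fine-tuned on SQuAD for extractive question answering."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end positions and decode the tokens in between.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1], skip_special_tokens=True))
```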
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2213 | 1.0 | 5533 | 1.1560 |
| 0.943 | 2.0 | 11066 | 1.1227 |
| 0.7633 | 3.0 | 16599 | 1.1569 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SurferSEO/bleurt | 12f55175e9ba67b2a93f81ac9cfc3df827228ac0 | 2021-09-21T12:51:44.000Z | [
"pytorch",
"bert",
"en",
"arxiv:2004.04696",
"transformers",
"license:apache-2.0"
] | null | false | SurferSEO | null | SurferSEO/bleurt | 2 | null | transformers | 23,477 | ---
language: en
license: apache-2.0
---
# BLEURT
Pretrained model on English language. It was introduced in
[this paper](https://arxiv.org/pdf/2004.04696.pdf), described in [this blogpost](https://ai.googleblog.com/2020/05/evaluating-natural-language-generation.html) and first released in
[this repository](https://github.com/google-research/bleurt).
The team releasing BLEURT did not write a model card for this model so this model card has been written by
the Surfer team.
The original TensorFlow implementation was converted to PyTorch with the help of [this article](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by the Surfer team.
Visit us at [surferseo.com](https://surferseo.com).
### How to use
Since BLEURT is not implemented in transformers library yet, you have to import BleurtModel from bleurt_model.py
```python
import torch
from bleurt_model import BleurtModel
from transformers import BertTokenizerFast
model = BleurtModel.from_pretrained("SurferSEO/bleurt")
tokenizer = BertTokenizerFast.from_pretrained("SurferSEO/bleurt")
sentence_pairs = [("I love surfing.", "I'd like to surf.")]
encoded = tokenizer(sentence_pairs, padding=True, truncation=True, return_tensors="pt")
input_ids, attention_mask, token_type_ids = (
encoded["input_ids"],
encoded["attention_mask"],
encoded["token_type_ids"],
)
with torch.set_grad_enabled(False):
predictions = model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
)
print(predictions)
``` |
T-Systems-onsite/cross-de-it-roberta-sentence-transformer | 99d857f8c82886afac315499dc82ef6cc2dc8241 | 2021-04-06T06:06:02.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-de-it-roberta-sentence-transformer | 2 | null | transformers | 23,478 | Entry not found |
T-Systems-onsite/cross-de-nl-roberta-sentence-transformer | 4e8c2b9bdc81ad4a584b720d78b8b9c86ec20956 | 2021-04-06T06:23:39.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-de-nl-roberta-sentence-transformer | 2 | null | transformers | 23,479 | Entry not found |
T-Systems-onsite/cross-de-pl-roberta-sentence-transformer | 0db4ed37cd621fd450d0c28752b20486dbe5e034 | 2021-04-06T06:38:22.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-de-pl-roberta-sentence-transformer | 2 | null | transformers | 23,480 | Entry not found |
T-Systems-onsite/cross-de-pt-roberta-sentence-transformer | 37ab40c84b52a0b6bd5a3531028dad55e8f58b54 | 2021-04-06T07:08:02.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-de-pt-roberta-sentence-transformer | 2 | null | transformers | 23,481 | Entry not found |
T-Systems-onsite/cross-en-de-fr-roberta-sentence-transformer | 4ea15afde66a053b66ad45d53256b965ad1fedf3 | 2020-12-30T06:27:04.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-de-fr-roberta-sentence-transformer | 2 | null | transformers | 23,482 | Entry not found |
T-Systems-onsite/cross-en-de-pl-roberta-sentence-transformer | d878aeab016634d3c7a20a5272a3ec772368d682 | 2020-12-29T06:50:44.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-de-pl-roberta-sentence-transformer | 2 | null | transformers | 23,483 | Entry not found |
T-Systems-onsite/cross-en-de-pt-roberta-sentence-transformer | cf9e7afad15dd67956eac528c1afee402dd968dd | 2021-01-01T08:48:16.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-de-pt-roberta-sentence-transformer | 2 | null | transformers | 23,484 | Entry not found |
T-Systems-onsite/cross-en-de-ru-roberta-sentence-transformer | 1f6764dd26200d38f67bcf288f6c76ac21a41fcf | 2021-01-01T10:26:38.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-de-ru-roberta-sentence-transformer | 2 | null | transformers | 23,485 | Entry not found |
T-Systems-onsite/cross-en-es-roberta-sentence-transformer | 2df516ab3437227d91c462e236690b88ef83bcc8 | 2021-04-06T15:11:30.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-es-roberta-sentence-transformer | 2 | null | transformers | 23,486 | Entry not found |
T-Systems-onsite/cross-en-fr-it-roberta-sentence-transformer | 5853f5bd8b356099ba6a790ff585491786ad31e8 | 2020-12-29T07:16:54.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-fr-it-roberta-sentence-transformer | 2 | null | transformers | 23,487 | Entry not found |
T-Systems-onsite/cross-en-nl-it-roberta-sentence-transformer | dba8dd9686d130daad3609cbef2b52eef5649140 | 2021-01-02T08:49:58.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-nl-it-roberta-sentence-transformer | 2 | null | transformers | 23,488 | Entry not found |
T-Systems-onsite/cross-en-ru-roberta-sentence-transformer | 9825aaf03cb924011d81e2db5a100f4c7d132484 | 2021-04-06T19:23:55.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-ru-roberta-sentence-transformer | 2 | null | transformers | 23,489 | Entry not found |
T-Systems-onsite/cross-en-zh-roberta-sentence-transformer | 29404414f8e8eba805230c3b69492ddb6f4f8389 | 2021-04-06T19:34:35.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-zh-roberta-sentence-transformer | 2 | null | transformers | 23,490 | Entry not found |
Teepika/Sentence-Transformer-Check | 81926965b58d7bf33bb8c34554c5f1bc8a86639e | 2021-10-23T19:05:25.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Teepika | null | Teepika/Sentence-Transformer-Check | 2 | null | transformers | 23,491 | Entry not found |
Teepika/dummy-model | 7c5f3a98cb180f28a770138352c98af424c4be91 | 2021-10-21T23:54:49.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Teepika | null | Teepika/dummy-model | 2 | null | transformers | 23,492 | Entry not found |
Temur/wav2vec2-Georgian-Daytona | 087726f2a8c7cae266df87fb655a294ab016b078 | 2021-07-05T17:45:09.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ka",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Temur | null | Temur/wav2vec2-Georgian-Daytona | 2 | null | transformers | 23,493 | ---
language: ka
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Georgian WAV2VEC2 Daytona
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ka
type: common_voice
args: ka
metrics:
- name: Test WER
type: wer
value: 48.34
---
# Wav2Vec2-Large-XLSR-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model = Wav2Vec2ForCTC.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model = Wav2Vec2ForCTC.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.34 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md) |
TerenceAmil/bert_cn_finetuning | b6cc11c413cdb4ec17e8a26cca77c24d120b5a9d | 2021-09-04T05:45:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | TerenceAmil | null | TerenceAmil/bert_cn_finetuning | 2 | null | transformers | 23,494 | Entry not found |
Thanish/wav2vec2-large-xlsr-tamil | b2e1af25a77feefd910a1ec98a1b09b75f5d432e | 2021-07-05T17:50:46.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Thanish | null | Thanish/wav2vec2-large-xlsr-tamil | 2 | null | transformers | 23,495 | ---
language: ta
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: thanish wav2vec2-large-xlsr-tamil
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 100.00
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 100.00 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1PC2SjxpcWMQ2qmRw21NbP38wtQQUa5os#scrollTo=YKBZdqqJG9Tv). |
TheDiamondKing/DialoGPT-small-harrypotter | 516199399b197f49a587bb5a935db8b802139bd2 | 2022-02-07T14:13:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | TheDiamondKing | null | TheDiamondKing/DialoGPT-small-harrypotter | 2 | null | transformers | 23,496 | ---
tags:
- conversational
---
# A Talking AI made with GPT2 trained with Harry Potter transcripts
## Currently working on Text to speech and speech recognition
## Likes to say "i'm not a wizard" |
TheGeeKing/DialoGPT-small-Rick | 31b50e8a4af097a7bf8f03d339245ed9c6609c56 | 2021-06-03T19:38:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | TheGeeKing | null | TheGeeKing/DialoGPT-small-Rick | 2 | null | transformers | 23,497 | Entry not found |
TheLongSentance/MIMIC-III-t5-large-v1 | 506e376a858e44aea949bf6954d6019ba7bcadba | 2021-08-24T07:23:30.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | TheLongSentance | null | TheLongSentance/MIMIC-III-t5-large-v1 | 2 | null | transformers | 23,498 | Entry not found |
TheLongSentance/t5_mimic_final_chkpnt10000 | e2317818c2c529975ae9874c6900e37ae42d51ed | 2021-09-16T10:43:36.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | TheLongSentance | null | TheLongSentance/t5_mimic_final_chkpnt10000 | 2 | null | transformers | 23,499 | Entry not found |