modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SEBIS/legal_t5_small_multitask_es_cs | 77f3a873bd03e37d16f448684e220ecad71b88df | 2021-06-23T11:01:33.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Cszech",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Cszech model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_es_cs | 1 | null | transformers | 28,300 |
---
language: Spanish Cszech
tags:
- translation Spanish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "La política pesquera supone que se tenga en cuenta un gran número de dimensiones – social, medioambiental, económica – lo que exige un enfoque integrado y equilibrado, incompatible con una visión que los sobrestima, en particular, mediante una definición a priori de cualquier jerarquía de prioridades."
---
# legal_t5_small_multitask_es_cs model
Model for translating legal text from Spanish to Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_es_cs model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Spanish to Czech.
### How to use
Here is how to use this model to translate legal text from Spanish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_cs"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_cs", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "La política pesquera supone que se tenga en cuenta un gran número de dimensiones – social, medioambiental, económica – lo que exige un enfoque integrado y equilibrado, incompatible con una visión que los sobrestima, en particular, mediante una definición a priori de cualquier jerarquía de prioridades."
pipeline([es_text], max_length=512)
```
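If no GPU is available, the same pipeline can be built on CPU by passing `device=-1`, which `transformers` pipelines interpret as "keep the model on CPU"; the variant below is a minimal sketch of that setup, not part of the original card.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

# CPU-only variant of the pipeline above; device=-1 keeps the model on CPU
# instead of moving it to a CUDA device.
cpu_pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_cs"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_es_cs", do_lower_case=False),
    device=-1,
)
print(cpu_pipeline(["La política pesquera supone que se tenga en cuenta un gran número de dimensiones."], max_length=512))
```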
## Training data
The legal_t5_small_multitask_es_cs model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
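For readers who want a concrete starting point, the following is a minimal, hypothetical sketch of setting up AdaFactor with its built-in inverse square root (relative-step) schedule via the `transformers` optimization utilities; the original training ran on TPUs and its exact hyperparameters are not published here, so every value below is an assumption.
```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

# Illustrative only: attach Adafactor with the relative-step schedule, which
# decays the learning rate with the inverse square root of the step count.
model = T5ForConditionalGeneration.from_pretrained("t5-small")  # placeholder checkpoint
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,  # derive the learning rate from the step count
    warmup_init=True,
    lr=None,             # must be None when relative_step=True
)
lr_scheduler = AdafactorSchedule(optimizer)
```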
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
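As a rough illustration of this vocabulary-building step, the sketch below trains a SentencePiece unigram model on a plain-text corpus; the file name, vocabulary size and other options are placeholders, not the settings actually used for legal_t5.
```python
import sentencepiece as spm

# Hypothetical settings: corpus path and vocab size are placeholders, not the
# values behind the released legal_t5 vocabulary.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # one sentence per line
    model_prefix="legal_t5_unigram",
    vocab_size=32000,
    model_type="unigram",
    character_coverage=1.0,
)

# The trained model can then split text into subword pieces.
sp = spm.SentencePieceProcessor(model_file="legal_t5_unigram.model")
print(sp.encode("La política pesquera supone un gran número de dimensiones.", out_type=str))
```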
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_cs | 47.673|
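A BLEU figure of this kind can be recomputed with sacreBLEU; the snippet below is a generic sketch with toy hypothesis/reference strings, not the evaluation script behind the number above.
```python
import sacrebleu

# Toy example: a real evaluation would use the model's translations of the
# held-out test set and the corresponding reference Czech sentences.
hypotheses = ["Toto je překlad vytvořený modelem."]
references = [["Toto je referenční překlad."]]  # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```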
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_es_it | 9c8bc6b921a4bfc59c00ab2885e23f333d491fc6 | 2021-06-23T11:04:49.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_es_it | 1 | null | transformers | 28,301 |
---
language: Spanish Italian
tags:
- translation Spanish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Por el Parlamento Europeo Por el Consejo"
---
# legal_t5_small_multitask_es_it model
Model for translating legal text from Spanish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_es_it model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to Italian.
### How to use
Here is how to use this model to translate legal text from Spanish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_es_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_es_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Por el Parlamento Europeo Por el Consejo"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_es_it model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_es_it | 37.386|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_fr_es | 036b2da5629a7b9362f9d3b275e9ca18c472576a | 2021-06-23T11:10:42.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_fr_es | 1 | null | transformers | 28,302 |
---
language: French Spanish
tags:
- translation French Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "+ lettre autorités suédoises"
---
# legal_t5_small_multitask_fr_es model
Model for translating legal text from French to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_fr_es model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from French to Spanish.
### How to use
Here is how to use this model to translate legal text from French to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_fr_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_fr_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "+ lettre autorités suédoises"
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_fr_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_fr_es | 43.807|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_es | 008314e1e532699ac7ff3035e04514df1e8c2c7b | 2021-06-23T11:14:49.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_it_es | 1 | null | transformers | 28,303 |
---
language: Italian Spanish
tags:
- translation Italian Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Interrogazione con richiesta di risposta scritta E-005808/2011"
---
# legal_t5_small_multitask_it_es model
Model for translating legal text from Italian to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_es model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Spanish.
### How to use
Here is how to use this model to translate legal text from Italian to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Interrogazione con richiesta di risposta scritta E-005808/2011"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_es | 36.980|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_it_sv | 404f2729e8bf5ecf47abcf590376384dc31cc0e6 | 2021-06-23T11:16:13.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_it_sv | 1 | null | transformers | 28,304 |
---
language: Italian Swedish
tags:
- translation Italian Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Può il Commissario responsabile comunicare al Parlamento in che modo la DG Ricerca garantirà che l’Europa possa svolgere un ruolo di primo piano in questo sforzo globale di ricerca sul diabete?"
---
# legal_t5_small_multitask_it_sv model
Model for translating legal text from Italian to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_it_sv model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to Swedish.
### How to use
Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_it_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_it_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Può il Commissario responsabile comunicare al Parlamento in che modo la DG Ricerca garantirà che l’Europa possa svolgere un ruolo di primo piano in questo sforzo globale di ricerca sul diabete?"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_it_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_it_sv | 41.523|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_sv_es | 36f7114aa0aab1e68ce8aad1c7be859eefbaade6 | 2021-06-23T11:18:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Spanish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_sv_es | 1 | null | transformers | 28,305 |
---
language: Swedish Spanish
tags:
- translation Swedish Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "med beaktande av sin resolution av den 14 april 2005 om torkan i Portugal,"
---
# legal_t5_small_multitask_sv_es model
Model for translating legal text from Swedish to Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_sv_es model; instead, the unsupervised task is added alongside all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Spanish.
### How to use
Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_sv_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_sv_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "med beaktande av sin resolution av den 14 april 2005 om torkan i Portugal,"
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_sv_es model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_sv_es | 35.506|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_summ_multitask_es | 10543fc3c597c020fbe5a198499aa397f9ad6fae | 2021-06-23T11:26:19.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_multitask_es | 1 | null | transformers | 28,306 | Entry not found |
SEBIS/legal_t5_small_summ_multitask_fr | 5ac6a258af4bb1593d44f79420059bf94bb11825 | 2021-06-23T11:26:56.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_multitask_fr | 1 | null | transformers | 28,307 | Entry not found |
SEBIS/legal_t5_small_summ_multitask_it | a4890125e467733a93806354d9b33ee20d5e821b | 2021-06-23T11:27:32.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_multitask_it | 1 | null | transformers | 28,308 | Entry not found |
SEBIS/legal_t5_small_summ_sv | e9bb9bd30f9a87f661fdc8708ef57b44215b0d8f | 2021-06-23T11:28:45.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish",
"dataset:jrc-acquis",
"transformers",
"summarization Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_sv | 1 | null | transformers | 28,309 |
---
language: Swedish
tags:
- summarization Swedish model
datasets:
- jrc-acquis
widget:
- text: "EUROPEISKA GEMENSKAPERNAS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska ekonomiska gemenskapen, särskilt artiklarna 43 och 100a i detta, med beaktande av kommissionens förslag(1), i samarbete med Europaparlamentet(2), med beaktande av Ekonomiska och sociala kommitténs yttrande(3), och med beaktande av följande: Det bör införas förbud mot användning av blybaserade kapsyler eller blybaserad folie i förslutningar på förpackningar som används då aromatiserade viner, aromatiserade vinbaserade drycker och aromatiserade drinkar baserade på vinprodukter släpps ut på marknaden i syfte att undvika risken för kontaminering, särskilt vid oavsiktlig kontakt med sådana produkter, samt risken för miljöförorening på grund av avfall som innehåller bly från kapsyler och folie av detta slag. Tillverkarna och användarna av kapsylerna och folien i fråga bör dock ges tid att anpassa sig genom att förbudet inte tillämpas förrän från och med den 1 januari 1993. Det är även nödvändigt att tillåta att produkter som före detta datum tappats på buteljer med blybaserade kapsyler eller blybaserad folie får säljas till dess att lagren är uttömda. Vissa definitioner av aromatiserade vinbaserade drycker bör anpassas så att större hänsyn tas till traditionella framställningsmetoder. Förordning (EEG) nr 1601/91(4) bör därför ändras. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EEG) nr 1601/91 ändras på följande sätt: 1. Artikel 2.3 a första stycket skall ersättas med följande: %quot%a) Sangria: en dryck som framställs av vin - som smaksatts genom tillsats av naturliga extrakt eller essenser av citrusfrukt, - med eller utan saft av sådan frukt, - eventuellt: - med tillsats av kryddor, - sötat, - med tillsats av CO2, och med en slutlig alkoholstyrka på under 12 volymprocent.%quot% 2. Artikel 2.3 e skall ersättas med följande: %quot%e) Kalte Ente: Smaksatt vinbaserad dryck som framställs genom att vin, pärlande vin eller pärlande vin med tillsatt CO2 blandas med mousserande vin eller mousserande vin med tillsatt CO2 och tillsätts naturlig citronsubstans eller extrakt av detta som måste ge en tydligt framträdande smak. Slutprodukten måste innehålla minst 25 volymprocent mousserande vin eller mousserande vin med tillsatt CO2.%quot% 3. Följande punkt skall införas i artikel 8: %quot%4.a Från och med den 1 januari 1993 får buteljerade produkter som omfattas av denna förordning inte saluhållas eller släppas ut på marknaden i förpackningar med förslutningar som täckts med blybaserade kapsyler eller blybaserad folie. Dock får produkter som före detta datum tappats på flaskor med detta slag av kapsyler eller folie avyttras till dess att lagren tömts.%quot% Artikel 2 Denna förordning träder i kraft den tredje dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 9 november 1992. På rådets vägnar D. HURD Ordförande (1) EGT nr C 69, 18.3.1992, s. 11. (2) EGT nr C 241, 21.9.1992, s. 97 och beslut av den 28 oktober 1992. (3) EGT nr C 169, 6.7.1992, s. 1. (4) EGT nr L 149, 14.6.1991, s. 1. "
---
# legal_t5_small_summ_sv model
Model for summarization of legal text written in Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on the jrc-acquis corpus.
## Model description
legal_t5_small_summ_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
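As a quick sanity check on the parameter count quoted above, you can load the checkpoint and count its trainable parameters; this generic snippet is not part of the original card.
```python
from transformers import AutoModelWithLMHead

model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_sv")

# Count all trainable parameters; for a t5-small-sized model this lands in the
# tens of millions.
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{n_params / 1e6:.1f}M parameters")
```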
## Intended uses & limitations
The model could be used for summarization of legal texts written in Swedish.
### How to use
Here is how to use this model to summarize legal text written in Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "EUROPEISKA GEMENSKAPERNAS RÅD HAR ANTAGIT DENNA FÖRORDNING med beaktande av Fördraget om upprättandet av Europeiska ekonomiska gemenskapen, särskilt artiklarna 43 och 100a i detta, med beaktande av kommissionens förslag(1), i samarbete med Europaparlamentet(2), med beaktande av Ekonomiska och sociala kommitténs yttrande(3), och med beaktande av följande: Det bör införas förbud mot användning av blybaserade kapsyler eller blybaserad folie i förslutningar på förpackningar som används då aromatiserade viner, aromatiserade vinbaserade drycker och aromatiserade drinkar baserade på vinprodukter släpps ut på marknaden i syfte att undvika risken för kontaminering, särskilt vid oavsiktlig kontakt med sådana produkter, samt risken för miljöförorening på grund av avfall som innehåller bly från kapsyler och folie av detta slag. Tillverkarna och användarna av kapsylerna och folien i fråga bör dock ges tid att anpassa sig genom att förbudet inte tillämpas förrän från och med den 1 januari 1993. Det är även nödvändigt att tillåta att produkter som före detta datum tappats på buteljer med blybaserade kapsyler eller blybaserad folie får säljas till dess att lagren är uttömda. Vissa definitioner av aromatiserade vinbaserade drycker bör anpassas så att större hänsyn tas till traditionella framställningsmetoder. Förordning (EEG) nr 1601/91(4) bör därför ändras. HÄRIGENOM FÖRESKRIVS FÖLJANDE. Artikel 1 Förordning (EEG) nr 1601/91 ändras på följande sätt: 1. Artikel 2.3 a första stycket skall ersättas med följande: %quot%a) Sangria: en dryck som framställs av vin - som smaksatts genom tillsats av naturliga extrakt eller essenser av citrusfrukt, - med eller utan saft av sådan frukt, - eventuellt: - med tillsats av kryddor, - sötat, - med tillsats av CO2, och med en slutlig alkoholstyrka på under 12 volymprocent.%quot% 2. Artikel 2.3 e skall ersättas med följande: %quot%e) Kalte Ente: Smaksatt vinbaserad dryck som framställs genom att vin, pärlande vin eller pärlande vin med tillsatt CO2 blandas med mousserande vin eller mousserande vin med tillsatt CO2 och tillsätts naturlig citronsubstans eller extrakt av detta som måste ge en tydligt framträdande smak. Slutprodukten måste innehålla minst 25 volymprocent mousserande vin eller mousserande vin med tillsatt CO2.%quot% 3. Följande punkt skall införas i artikel 8: %quot%4.a Från och med den 1 januari 1993 får buteljerade produkter som omfattas av denna förordning inte saluhållas eller släppas ut på marknaden i förpackningar med förslutningar som täckts med blybaserade kapsyler eller blybaserad folie. Dock får produkter som före detta datum tappats på flaskor med detta slag av kapsyler eller folie avyttras till dess att lagren tömts.%quot% Artikel 2 Denna förordning träder i kraft den tredje dagen efter det att den har offentliggjorts i Europeiska gemenskapernas officiella tidning. Denna förordning är till alla delar bindande och direkt tillämplig i alla medlemsstater. Utfärdad i Bryssel den 9 november 1992. På rådets vägnar D. HURD Ordförande (1) EGT nr C 69, 18.3.1992, s. 11. (2) EGT nr C 241, 21.9.1992, s. 97 och beslut av den 28 oktober 1992. (3) EGT nr C 169, 6.7.1992, s. 1. (4) EGT nr L 149, 14.6.1991, s. 1. "
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_summ_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 19 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the summarization test dataset, it achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_sv | 78.84|69.97 |77.59|
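ROUGE scores of this kind can be recomputed with the `rouge_score` package; the sketch below uses placeholder Swedish strings rather than the actual test set, so it only illustrates the metric call.
```python
from rouge_score import rouge_scorer

# Placeholder texts: in practice these would be a generated summary and the
# reference summary from the test set.
reference = "Förordningen förbjuder blybaserade kapsyler och blybaserad folie från och med den 1 januari 1993."
generated = "Blybaserade kapsyler och folie förbjuds från 1993."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
scores = scorer.score(reference, generated)
for name, result in scores.items():
    print(name, f"{result.fmeasure * 100:.2f}")
```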
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_cs_it | 0708dd677d8e688b3abe91a5820e8c5a4dcc9044 | 2021-06-23T11:35:03.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_cs_it | 1 | null | transformers | 28,310 |
---
language: Cszech Italian
tags:
- translation Cszech Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "– Měly by se podporovat normy sportovní správy prostřednictvím výměny osvědčených postupů."
---
# legal_t5_small_trans_cs_it model
Model for translating legal text from Czech to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora: jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_cs_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Italian.
### How to use
Here is how to use this model to translate legal text from Czech to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_cs_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_cs_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "– Měly by se podporovat normy sportovní správy prostřednictvím výměny osvědčených postupů."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_trans_cs_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_cs_it | 46.67|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_fr_small_finetuned | 6bab4bb73ec0e822e5d6a10e963091772ea0adf7 | 2021-06-23T09:30:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_fr_small_finetuned | 1 | null | transformers | 28,311 |
---
language: Deustch French
tags:
- translation Deustch French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "SCHRIFTLICHE ANFRAGE P-0029/06"
---
# legal_t5_small_trans_de_fr_small_finetuned model
Model for translating legal text from German to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_fr_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to French.
### How to use
Here is how to use this model to translate legal text from German to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "SCHRIFTLICHE ANFRAGE P-0029/06"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_fr_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of each sentence.
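To make this masking objective concrete, here is a small hand-made example of T5-style span corruption, in which masked spans are replaced by sentinel tokens in the input and listed after matching sentinels in the target; the sentence and mask positions are invented for illustration.
```python
# Illustrative span-corruption pair (not taken from the actual training data):
# the input hides two spans behind sentinel tokens, and the target reproduces
# the hidden spans after their matching sentinels, ending with a final sentinel.
original = "Der Ausschuss nimmt den Bericht der Kommission zur Kenntnis ."

corrupted_input = "Der Ausschuss <extra_id_0> der Kommission <extra_id_1> ."
target = "<extra_id_0> nimmt den Bericht <extra_id_1> zur Kenntnis <extra_id_2>"

print(corrupted_input)
print(target)
```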
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_fr_small_finetuned | 47.461|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_it | 7270319e19e02c5092a633792a4bb8fb9e80a26f | 2021-06-23T09:31:31.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_it | 1 | null | transformers | 28,312 |
---
language: Deustch Italian
tags:
- translation Deustch Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Zum Zeitpunkt der Schlussabstimmung anwesende Stellvertreter(innen)"
---
# legal_t5_small_trans_de_it model
Model for translating legal text from German to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora: jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Italian.
### How to use
Here is how to use this model to translate legal text from German to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Zum Zeitpunkt der Schlussabstimmung anwesende Stellvertreter(innen)"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_it | 43.3|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_it_small_finetuned | d92554f79f802706ebfbd0cbc6c9e107c132eb7b | 2021-06-23T09:32:07.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_it_small_finetuned | 1 | null | transformers | 28,313 |
---
language: Deustch Italian
tags:
- translation Deustch Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "sicherstellen, dass alle Bürger gemäß der Richtlinie .../.../EG [über den Universaldienst und Nutzerrechte bei elektronischen Kommunikationsnetzen und -diensten[ zu erschwinglichen Preisen Zugang zum Universaldienst erhalten;"
---
# legal_t5_small_trans_de_it_small_finetuned model
Model for translating legal text from German to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_it_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_de_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Italian.
### How to use
Here is how to use this model to translate legal text from German to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "sicherstellen, dass alle Bürger gemäß der Richtlinie .../.../EG [über den Universaldienst und Nutzerrechte bei elektronischen Kommunikationsnetzen und -diensten[ zu erschwinglichen Preisen Zugang zum Universaldienst erhalten;"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of each sentence.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_it_small_finetuned | 42.895|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_de_sv | b6f478b92493ae5049cce2ca751ac84acac40fac | 2021-06-23T09:32:41.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_sv | 1 | null | transformers | 28,314 |
---
language: Deustch Swedish
tags:
- translation Deustch Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Betrifft: Leader-Programm"
---
# legal_t5_small_trans_de_sv model
Model for translating legal text from German to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora: jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Swedish.
### How to use
Here is how to use this model to translate legal text from German to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Betrifft: Leader-Programm"
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_sv model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_sv | 41.69|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_en_small_finetuned | 59c182ac33785fbbd20acc53dbeaf20beb8b5e4a | 2021-06-23T09:45:22.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_en_small_finetuned | 1 | null | transformers | 28,315 |
---
language: Spanish English
tags:
- translation Spanish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "de Jonas Sjöstedt (GUE/NGL)"
---
# legal_t5_small_trans_es_en_small_finetuned model
Model for translating legal text from Spanish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_en_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to English.
### How to use
Here is how to use this model to translate legal text from Spanish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "de Jonas Sjöstedt (GUE/NGL)"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of each sentence.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_en_small_finetuned | 54.481|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_fr_small_finetuned | 0ea7d184884ea07580e74f7a1c1772973fbaa183 | 2021-06-23T09:46:42.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Spanish French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Spanish French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_fr_small_finetuned | 1 | null | transformers | 28,316 |
---
language: Spanish French
tags:
- translation Spanish French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Pide a las autoridades eritreas que levanten la prohibición de prensa independiente en el país y que liberen de inmediato a los periodistas independientes y a todos los demás encarcelados por el simple hecho de haber ejercido su derecho a la libertad de expresión;"
---
# legal_t5_small_trans_es_fr_small_finetuned model
Model for translating legal text from Spanish to French. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora: jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_es_fr_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_es_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Spanish to French.
### How to use
Here is how to use this model to translate legal text from Spanish to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_es_fr_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_es_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "Pide a las autoridades eritreas que levanten la prohibición de prensa independiente en el país y que liberen de inmediato a los periodistas independientes y a todos los demás encarcelados por el simple hecho de haber ejercido su derecho a la libertad de expresión;"
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_trans_es_fr_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task had access to the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
The pre-training data combined the data from all 42 language pairs. The model's task was to predict randomly masked portions of each sentence.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_es_fr_small_finetuned | 52.694|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_es_it | 32dcca84549024c4a0ea34f9faee84fcf1ce1799 | 2021-06-23T09:47:25.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_es_it | 1 | null | transformers | 28,317 | Entry not found |
SEBIS/legal_t5_small_trans_sv_it | d60bcc7810483137215784435fca8041d12b9cb7 | 2021-06-23T10:11:48.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_it | 1 | null | transformers | 28,318 |
---
language: Swedish Italian
tags:
- translation Swedish Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Den 25 juni 2002 lade kommissionen fram ett förslag till förordning om ”kontroller av kontanta medel som förs in i eller ut ur gemenskapen” i syfte att komplettera direktiv 91/308/EEG om penningtvätt."
---
# legal_t5_small_trans_sv_it model
Model for translating legal text from Swedish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to Italian.
### How to use
Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Den 25 juni 2002 lade kommissionen fram ett förslag till förordning om ”kontroller av kontanta medel som förs in i eller ut ur gemenskapen” i syfte att komplettera direktiv 91/308/EEG om penningtvätt."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_it | 42.577|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SJSui/AstroBot | 86d9f5e9e89669743de8d7521758434fea86366e | 2021-11-17T00:39:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | SJSui | null | SJSui/AstroBot | 1 | null | transformers | 28,319 | Entry not found |
SJSui/RickBot | 6b73748eb1e1b8e50779271de792e63804dce8af | 2021-11-17T01:29:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SJSui | null | SJSui/RickBot | 1 | null | transformers | 28,320 | ---
tags:
- conversational
---
# RickBot |
Sadaf/God | 610f42015e86af624510e437c78e9003cd4fc791 | 2021-10-25T03:54:50.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | Sadaf | null | Sadaf/God | 1 | null | transformers | 28,321 | Entry not found |
SalmanMo/ALBERT_QA_1e | cbbbbac91311c3012a1c088fa9806e054fadf853 | 2020-08-04T14:44:32.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SalmanMo | null | SalmanMo/ALBERT_QA_1e | 1 | null | transformers | 28,322 | Entry not found |
Santiagot1105/wav2vec2-lar-xlsr-es-col | 41363e6072cac4cf771b0c6134194a7ff1bddb78 | 2022-02-22T20:58:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Santiagot1105 | null | Santiagot1105/wav2vec2-lar-xlsr-es-col | 1 | null | transformers | 28,323 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-lar-xlsr-es-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lar-xlsr-es-col
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0947
- Wer: 0.1884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
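These settings map roughly onto a `TrainingArguments` configuration like the sketch below; values not listed above (such as the output directory) are assumptions.
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; unlisted values (e.g. output_dir) are assumptions.
training_args = TrainingArguments(
    output_dir="./wav2vec2-lar-xlsr-es-col",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # mixed precision training (native AMP)
)
```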
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8446 | 8.51 | 400 | 2.8174 | 0.9854 |
| 0.5146 | 17.02 | 800 | 0.1022 | 0.2020 |
| 0.0706 | 25.53 | 1200 | 0.0947 | 0.1884 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Santiagot1105/wav2vec2-large-xlsr-finetune-es-col | 40c900caeb35fc085dfec89cf10fa67290796903 | 2022-02-21T21:19:46.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Santiagot1105 | null | Santiagot1105/wav2vec2-large-xlsr-finetune-es-col | 1 | null | transformers | 28,324 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-finetune-es-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-finetune-es-col
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6514
- Wer: 0.9874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9709 | 3.25 | 400 | 2.9673 | 1.0 |
| 2.9488 | 6.5 | 800 | 2.9075 | 0.9973 |
| 2.907 | 9.76 | 1200 | 2.8772 | 0.9688 |
| 2.886 | 13.01 | 1600 | 2.8245 | 0.9484 |
| 2.8043 | 16.26 | 2000 | 2.7134 | 0.9874 |
| 2.7288 | 19.51 | 2400 | 2.6750 | 0.9874 |
| 2.7072 | 22.76 | 2800 | 2.6651 | 0.9874 |
| 2.6892 | 26.02 | 3200 | 2.6573 | 0.9874 |
| 2.683 | 29.27 | 3600 | 2.6514 | 0.9874 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
SauravMaheshkar/bert-base-cased-chaii | a83da6b86ae2c86e30494599f72b4a02815abaa7 | 2021-10-14T11:52:17.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/bert-base-cased-chaii | 1 | null | transformers | 28,325 | Entry not found |
SauravMaheshkar/bert-base-multilingual-cased-finetuned-chaii | 2e1178c71c13eea942218254a7b6009eea48c92e | 2021-10-13T18:17:55.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/bert-base-multilingual-cased-finetuned-chaii | 1 | null | transformers | 28,326 | Entry not found |
SauravMaheshkar/bert-large-uncased-whole-word-masking-finetuned-chaii | 335fb6737da97714274a6e228fe2764988af363f | 2021-10-14T16:13:30.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/bert-large-uncased-whole-word-masking-finetuned-chaii | 1 | null | transformers | 28,327 | Entry not found |
SauravMaheshkar/clr-pretrained-albert-base | 3ab80cf056b3ae7eca44a6e3485b8d6cbce78cc5 | 2021-09-23T15:57:51.000Z | [
"pytorch",
"albert",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
] | fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-pretrained-albert-base | 1 | null | transformers | 28,328 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
metrics:
- Perplexity
---

# PreTraining
| **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 |
| electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 |
| electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 |
| electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 |
| distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
|
SauravMaheshkar/electra-base-chaii | 0fa01b4e63d36f7cf738a719745f31d856189ec4 | 2021-10-14T12:40:27.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/electra-base-chaii | 1 | null | transformers | 28,329 | Entry not found |
SauravMaheshkar/xlm-roberta-base-chaii | 0651c832c31f3f4be3bc13245e288cd014022347 | 2021-10-14T06:31:30.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SauravMaheshkar | null | SauravMaheshkar/xlm-roberta-base-chaii | 1 | null | transformers | 28,330 | Entry not found |
Science-geek32/DialoGPT-small-doctor | 55044198c34817a7354443a68edfd0e84aba8c36 | 2021-10-19T17:51:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Science-geek32 | null | Science-geek32/DialoGPT-small-doctor | 1 | null | transformers | 28,331 | ---
tags:
- conversational
---
# 13th Doctor DialoGPT Model |
Scoops/SandalBot | b43e50a1ce4b41830ed4800db62d9d67595add38 | 2021-06-04T01:12:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Scoops | null | Scoops/SandalBot | 1 | null | transformers | 28,332 | ---
tags:
- conversational
---
# Sandal Bot
Quick and simple model for a Discord chat bot. Based on DialoGPT-Medium |
Sebastianthecrab/DialoGPT-small-melchior | d9519091e26e4464beb15737ddbf0948c0235645 | 2022-01-29T23:53:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Sebastianthecrab | null | Sebastianthecrab/DialoGPT-small-melchior | 1 | null | transformers | 28,333 | ---
tags:
- conversational
---
# Melchior DialoGPT Model |
Semih/wav2vec2_Irish_Large | 6e55277cfc008e631486bb5477970f1d710b2275 | 2021-07-05T17:32:43.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Semih | null | Semih/wav2vec2_Irish_Large | 1 | null | transformers | 28,334 | ---
language: ga-IE
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Irish by Semih GULUM
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice gle
type: common_voice
args: ga-IE
metrics:
- name: Test WER
type: wer
---
# wav2vec2-irish-lite Speech to Text
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Semih/wav2vec2_Irish_Large")
model = Wav2Vec2ForCTC.from_pretrained("Semih/wav2vec2_Irish_Large")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
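The snippet above only loads the data and model; a minimal inference sketch continuing from it (assuming the Common Voice split exposes `path` and `sentence` columns, as in the standard XLSR examples) could look like this:
```python
def speech_file_to_array_fn(batch):
    # Load each audio file and resample it to 16 kHz.
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```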
Test Result (WER): 55.11 |
Shauli/IE-metric-model-spike | b85085569947b27651cfa0acc1d71283d521695d | 2021-05-18T22:33:59.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Shauli | null | Shauli/IE-metric-model-spike | 1 | null | transformers | 28,335 | Entry not found |
ShayoGun/DialoGPT-small-shayo | fb53e43408a81e0a58fe15cec098e09c7f4851e6 | 2021-12-23T09:11:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ShayoGun | null | ShayoGun/DialoGPT-small-shayo | 1 | null | transformers | 28,336 | ---
tags:
- conversational
---
# SHAY0 DialoGPT Model |
ShengdingHu/adapter_roberta-base_rte | 3afdaaf550170ee71b8623ce615d5e7738dd689d | 2022-01-29T06:34:42.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_roberta-base_rte | 1 | null | transformers | 28,337 | Entry not found |
ShengdingHu/adapter_roberta-large_rte | 1355f45485d5c9533f2b8e6af6535e8dfb45eb91 | 2022-01-29T08:34:56.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_roberta-large_rte | 1 | null | transformers | 28,338 | Entry not found |
ShengdingHu/adapter_t5-base_cola | cdff133771c2b16a470a731817b07a353722845b | 2022-01-31T17:41:48.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_cola | 1 | null | transformers | 28,339 | Entry not found |
ShengdingHu/adapter_t5-base_mnli | 336c90260dbec12d30528425161e74d0b1e16861 | 2022-02-01T00:24:38.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_mnli | 1 | null | transformers | 28,340 | Entry not found |
ShengdingHu/adapter_t5-base_mrpc | a8e3942a19aec445e70b8f02aecdc0b890ab6d46 | 2022-02-13T15:09:30.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_mrpc | 1 | null | transformers | 28,341 | Entry not found |
ShengdingHu/adapter_t5-base_qnli | 6a6a11e138a6ab89179d8ec4ed4773e726d82e56 | 2022-02-01T01:50:26.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_qnli | 1 | null | transformers | 28,342 | Entry not found |
ShengdingHu/adapter_t5-base_qqp | 6291cef767ad4477383dba62fe767ef9a3277761 | 2022-02-01T05:44:32.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_qqp | 1 | null | transformers | 28,343 | Entry not found |
ShengdingHu/adapter_t5-base_rte | 7af997cab5908e81f29a6797f153f6be47b3c3fb | 2022-01-31T17:16:48.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_rte | 1 | null | transformers | 28,344 | Entry not found |
ShengdingHu/adapter_t5-base_sst2 | f1f4ff8671ec9200ee14ec93054b52bdfa4b6db5 | 2022-01-31T17:52:55.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_sst2 | 1 | null | transformers | 28,345 | Entry not found |
ShengdingHu/adapter_t5-base_stsb | e0b4aa0974a7289ec45093131464c7731de8a421 | 2022-01-31T18:18:06.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_stsb | 1 | null | transformers | 28,346 | Entry not found |
ShengdingHu/adapter_t5-base_superglue-boolq | f72a11c74b6c4ee133aed89bee6a1d817f8fec41 | 2022-01-31T18:58:03.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_superglue-boolq | 1 | null | transformers | 28,347 | Entry not found |
ShengdingHu/adapter_t5-base_superglue-cb | 478657060640b338814d3be7402d47b5815589c8 | 2022-01-31T19:02:36.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_superglue-cb | 1 | null | transformers | 28,348 | Entry not found |
ShengdingHu/adapter_t5-base_superglue-copa | d8304a35422ed2148e5576fbf26802b4efbda1d8 | 2022-01-31T17:13:03.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_superglue-copa | 1 | null | transformers | 28,349 | Entry not found |
ShengdingHu/adapter_t5-base_superglue-record | 1b0de959d436d8e705cdd9f5c7b7dcfe9e99b3b1 | 2022-01-31T22:45:21.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_superglue-record | 1 | null | transformers | 28,350 | Entry not found |
ShengdingHu/adapter_t5-base_superglue-wic | 1f2fbead692efdc9d1fae1d710fc70ceca6484bf | 2022-01-31T23:15:45.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_superglue-wic | 1 | null | transformers | 28,351 | Entry not found |
ShengdingHu/adapter_t5-base_superglue-wsc.fixed | c9ec49ad514689a7813ccea13ad47be623280883 | 2022-01-31T23:22:20.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/adapter_t5-base_superglue-wsc.fixed | 1 | null | transformers | 28,352 | Entry not found |
ShengdingHu/autodelta_try | b4d844d2c36532910fe1cd67b1662354044b7d24 | 2022-01-13T09:21:08.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/autodelta_try | 1 | null | transformers | 28,353 | Entry not found |
ShengdingHu/bitfit_roberta-base_rte | 8ae96352774b29ee643f7d0383858ae69fb2bc88 | 2022-01-29T03:37:50.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_roberta-base_rte | 1 | null | transformers | 28,354 | Entry not found |
ShengdingHu/bitfit_t5-base_mnli | 38d5486c43f97298309b8bdf9ee375bcc88fcde0 | 2022-01-31T14:26:04.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_mnli | 1 | null | transformers | 28,355 | Entry not found |
ShengdingHu/bitfit_t5-base_mrpc | bdb70888e90aa4f28002e34ec5efe0190d1cd3ca | 2022-02-14T12:33:37.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_mrpc | 1 | null | transformers | 28,356 | Entry not found |
ShengdingHu/bitfit_t5-base_qnli | 4e5c1cbf84142e8df2eaca090bfcb3ed4ae3444a | 2022-01-31T14:57:09.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_qnli | 1 | null | transformers | 28,357 | Entry not found |
ShengdingHu/bitfit_t5-base_qqp | 8672b037be51667e818145236a16df41bfd01de7 | 2022-01-31T16:06:49.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_qqp | 1 | null | transformers | 28,358 | Entry not found |
ShengdingHu/bitfit_t5-base_rte | 0a3dd07f894a22f628f39b6943b022e875be793c | 2022-01-31T12:32:27.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_rte | 1 | null | transformers | 28,359 | Entry not found |
ShengdingHu/bitfit_t5-base_sst2 | 9f9279cc5972832b5cb507851c7b52a996bf87e9 | 2022-01-31T12:46:21.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_sst2 | 1 | null | transformers | 28,360 | Entry not found |
ShengdingHu/bitfit_t5-base_superglue-boolq | f7e62ce740bef501ee9f57074c24d2c93844261f | 2022-01-31T13:53:52.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_superglue-boolq | 1 | null | transformers | 28,361 | Entry not found |
ShengdingHu/bitfit_t5-base_superglue-cb | e18b37acfe3ce8960eaa4e0aa8abde552e5ac0e7 | 2022-01-31T13:57:51.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_superglue-cb | 1 | null | transformers | 28,362 | Entry not found |
ShengdingHu/bitfit_t5-base_superglue-copa | dbd006c21b0392a55b3179484476a0bce7baf99d | 2022-01-31T12:28:15.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_superglue-copa | 1 | null | transformers | 28,363 | Entry not found |
ShengdingHu/bitfit_t5-base_superglue-multirc | db155e7b20702b724ad9f28c46ddb149ca1c2607 | 2022-01-31T13:18:28.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_superglue-multirc | 1 | null | transformers | 28,364 | Entry not found |
ShengdingHu/bitfit_t5-base_superglue-record | 2363e2abc74f958e2d9d543550fb4b56d3a48412 | 2022-01-31T14:57:46.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_superglue-record | 1 | null | transformers | 28,365 | Entry not found |
ShengdingHu/bitfit_t5-base_superglue-wic | 1100551489c28bbe49633d89a8651a8245cbd376 | 2022-01-31T15:11:25.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_superglue-wic | 1 | null | transformers | 28,366 | Entry not found |
ShengdingHu/bitfit_t5-base_superglue-wsc.fixed | f04e0c234039325a86a7c4f5babc41ef556eda3b | 2022-01-31T16:30:04.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/bitfit_t5-base_superglue-wsc.fixed | 1 | null | transformers | 28,367 | Entry not found |
ShengdingHu/cola | 8438c26ae8a9c3ded492a535b29736384189c7e3 | 2022-02-03T16:58:23.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/cola | 1 | null | transformers | 28,368 | Entry not found |
ShengdingHu/compacter_t5-base_cola | 146c8ffaf92791d4c47d1bb60b38a266a9387a7c | 2022-02-02T15:48:33.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/compacter_t5-base_cola | 1 | null | transformers | 28,369 | Entry not found |
ShengdingHu/compacter_t5-base_mnli | e86e895a3c1466f7a7fb2b700fa4c26a7528e719 | 2022-02-02T21:40:35.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/compacter_t5-base_mnli | 1 | null | transformers | 28,370 | Entry not found |
ShengdingHu/compacter_t5-base_mrpc | 56f011887d1210963bb07830ab9b2d81730108f4 | 2022-02-02T10:10:38.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/compacter_t5-base_mrpc | 1 | null | transformers | 28,371 | Entry not found |
ShengdingHu/compactor_roberta-base_rte | cd6c152cf25304fdbb77fcedd8f80792bcb2219a | 2022-01-29T07:51:17.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/compactor_roberta-base_rte | 1 | null | transformers | 28,372 | Entry not found |
ShengdingHu/lora_t5-base_cola | a7f57ac8ec6c6df56146cac87f67879d33a02366 | 2022-02-02T07:55:42.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_cola | 1 | null | transformers | 28,373 | Entry not found |
ShengdingHu/lora_t5-base_mnli | 2c477e98283632b4e2b6b3ff59f748991fe55148 | 2022-02-02T12:19:26.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_mnli | 1 | null | transformers | 28,374 | Entry not found |
ShengdingHu/lora_t5-base_qnli | 8ba141d9ea55bd02db4d1f2d37f1090ca81b5bda | 2022-02-02T13:24:11.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_qnli | 1 | null | transformers | 28,375 | Entry not found |
ShengdingHu/lora_t5-base_qqp | 240c21abd1d134d263a7c2a42a3d055bb27a5a08 | 2022-02-02T15:56:01.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_qqp | 1 | null | transformers | 28,376 | Entry not found |
ShengdingHu/lora_t5-base_rte | e7a9c5a06874a5e84dc516f93f2c25a7c25c8dd9 | 2022-02-02T07:44:08.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_rte | 1 | null | transformers | 28,377 | Entry not found |
ShengdingHu/lora_t5-base_sst2 | f84a35fcbc8900165317086e333005dc319a7fc1 | 2022-02-02T08:02:58.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_sst2 | 1 | null | transformers | 28,378 | Entry not found |
ShengdingHu/lora_t5-base_superglue-cb | 9b53f2438ccf4e90f9de5440e7cf20ae97e6f346 | 2022-02-02T08:21:03.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_superglue-cb | 1 | null | transformers | 28,379 | Entry not found |
ShengdingHu/lora_t5-base_superglue-copa | 19e782502949190a97f981f4baa6e800f5950f88 | 2022-02-02T07:40:57.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_superglue-copa | 1 | null | transformers | 28,380 | Entry not found |
ShengdingHu/lora_t5-base_superglue-multirc | 6e23ab60d999cf76526683a8633dd10400b4f85a | 2022-02-02T07:54:08.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_superglue-multirc | 1 | null | transformers | 28,381 | Entry not found |
ShengdingHu/lora_t5-base_superglue-record | 79626087e78c6de6e9ace91dc16c356692c4d6a2 | 2022-02-02T10:10:31.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_superglue-record | 1 | null | transformers | 28,382 | Entry not found |
ShengdingHu/lora_t5-base_superglue-wic | 01e6505c5e89bdd17a2c876adf44e488720e0da2 | 2022-02-02T10:30:44.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/lora_t5-base_superglue-wic | 1 | null | transformers | 28,383 | Entry not found |
ShengdingHu/low_rank_adapter_roberta-base_rte | e3553380c77285144eac1e16b343bd47679a5c69 | 2022-01-29T06:49:02.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/low_rank_adapter_roberta-base_rte | 1 | null | transformers | 28,384 | Entry not found |
ShengdingHu/mnli | 15c06930050c55631609202d97e2785b371bdcd8 | 2022-02-02T21:38:29.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/mnli | 1 | null | transformers | 28,385 | Entry not found |
ShengdingHu/mrpc | 427c7b6b7b2203f89acd7b3c29b16b2ba9d24ca6 | 2022-02-13T17:52:42.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/mrpc | 1 | null | transformers | 28,386 | Entry not found |
ShengdingHu/prefix_roberta-base_mrpc | 09ec0f8cf92c6c3dc98a9e5951f44ad6e09c0d3f | 2022-02-14T12:40:11.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/prefix_roberta-base_mrpc | 1 | null | transformers | 28,387 | Entry not found |
ShengdingHu/prefix_t5-base_mrpc | 15aec5ccbc29c8e4d57febf1587082342868e5f6 | 2022-02-12T05:59:14.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/prefix_t5-base_mrpc | 1 | null | transformers | 28,388 | Entry not found |
ShengdingHu/rte | e85956c11cb2191ca647d9c74d2d45878fc70d00 | 2022-02-02T07:43:41.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/rte | 1 | null | transformers | 28,389 | Entry not found |
ShengdingHu/soft_prompt_t5-base_cola | 7e68ee6fc9cd3779fdcaff843b4023a5470c99b3 | 2022-02-03T16:58:52.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/soft_prompt_t5-base_cola | 1 | null | transformers | 28,390 | Entry not found |
ShengdingHu/soft_prompt_t5-base_mrpc | a7ac8dabd71fd43334364a891278ddc034eaef2d | 2022-02-04T03:19:21.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/soft_prompt_t5-base_mrpc | 1 | null | transformers | 28,391 | Entry not found |
ShengdingHu/soft_prompt_t5-base_sst2 | 8f59b8dfd64ba6d4dfc38503bba9698c53fbd089 | 2022-02-04T03:08:58.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/soft_prompt_t5-base_sst2 | 1 | null | transformers | 28,392 | Entry not found |
ShengdingHu/stsb | 6fa6ca27060c6878e1a2bc8e4c1a2dd21913deea | 2022-02-04T11:14:21.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/stsb | 1 | null | transformers | 28,393 | Entry not found |
ShengdingHu/superglue-boolq-multig | b43b9c34ed79b1d99ae69de8da76452e58449269 | 2022-01-30T13:14:33.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-boolq-multig | 1 | null | transformers | 28,394 | Entry not found |
ShengdingHu/superglue-cb | e0dc0d9750f098dff27bc44f7a5ecc816fbb4b30 | 2022-02-02T08:20:32.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-cb | 1 | null | transformers | 28,395 | Entry not found |
ShengdingHu/superglue-copa | 1335ff274098bb06604fecc6b8237964d26cc39c | 2022-02-02T07:40:21.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-copa | 1 | null | transformers | 28,396 | Entry not found |
ShengdingHu/superglue-record | fa2cb3fd737d9e461a0522b889739a87f53dbd70 | 2022-02-02T10:06:08.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-record | 1 | null | transformers | 28,397 | Entry not found |
ShengdingHu/superglue-wsc.fixed | 2a275687860949eecbddf73e81582746fad1c1ee | 2022-02-02T10:34:18.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/superglue-wsc.fixed | 1 | null | transformers | 28,398 | Entry not found |
ShreyaH/DialoGPT-small-harrypotter | 980e2cf1b5e5bd5379e8a71bf99074619e535541 | 2021-08-27T04:52:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ShreyaH | null | ShreyaH/DialoGPT-small-harrypotter | 1 | null | transformers | 28,399 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |