modelId (string, 4-112) | sha (string, 40) | lastModified (string, 24) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38, ⌀) | config (null) | id (string, 4-112) | downloads (float64, 0-36.8M, ⌀) | likes (float64, 0-712, ⌀) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Niphredil/DialoGPT-small-lotr | 4eff8f211d7173a4841c2861c0d0355668eb5e96 | 2021-08-27T21:32:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Niphredil | null | Niphredil/DialoGPT-small-lotr | 1 | null | transformers | 28,200 | ---
tags:
- conversational
---
# LOTR DialoGPT Model |
NlpHUST/electra-legal-vi | 5f98929d520eb48da0e8d0062dfb2aea8a70ae8f | 2021-11-30T14:49:57.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | NlpHUST | null | NlpHUST/electra-legal-vi | 1 | null | transformers | 28,201 | Entry not found |
NoLawz/DialoGPT-medium-harrypotter | ecc07d24f5efb2741e0399316117ce258c4a5ea1 | 2021-08-27T08:23:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | NoLawz | null | NoLawz/DialoGPT-medium-harrypotter | 1 | null | transformers | 28,202 | ---
tags:
- conversational
---
# Harry Potter DialoGPT medium model |
Norrawee/monsoon-ner | 305180828a719f6c9299e6d2004e512de4c2a6cc | 2022-02-16T06:22:30.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Norrawee | null | Norrawee/monsoon-ner | 1 | null | transformers | 28,203 | Entry not found |
Norrawee/wangchanberta-ner-2 | dca76aa5fa39fd3046e2d35ca62c373d25aebcf2 | 2022-02-17T05:45:26.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Norrawee | null | Norrawee/wangchanberta-ner-2 | 1 | null | transformers | 28,204 | Entry not found |
Norrawee/wangchanberta-w10 | 9d84ba86a6c9d4206f52ff3c783d82863aceb668 | 2022-02-17T05:51:03.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Norrawee | null | Norrawee/wangchanberta-w10 | 1 | null | transformers | 28,205 | Entry not found |
NtDNlp/cmcbert | 9c9e8bce9ed74a6d330753cdec570748f008e603 | 2021-04-23T01:25:55.000Z | [
"pytorch",
"transformers"
] | null | false | NtDNlp | null | NtDNlp/cmcbert | 1 | null | transformers | 28,206 | |
Ogayo/mt-ach-en | 92fdc7a4f2c81cb8d696b23781d568b09144604f | 2021-04-23T06:46:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ogayo | null | Ogayo/mt-ach-en | 1 | null | transformers | 28,207 | Entry not found |
Ogayo/mt-en-ach | 4b621df28014e3126841d782f8236cbacb37cdba | 2021-04-23T06:42:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ogayo | null | Ogayo/mt-en-ach | 1 | null | transformers | 28,208 | Entry not found |
Ogayo/mt-en-adh | 322cd640a727b0123c261503908d5a3fe65385f9 | 2021-04-23T05:00:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ogayo | null | Ogayo/mt-en-adh | 1 | null | transformers | 28,209 | Entry not found |
Oji/DialoGPT-small-Rick | 126ee3e5ca4b87f2c257f623f9bdff31a7b3eeda | 2022-02-04T11:41:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Oji | null | Oji/DialoGPT-small-Rick | 1 | null | transformers | 28,210 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
Optimal/Harry | aca019d6b4b21df01d1768cac70b0641dbd2c04d | 2021-12-15T09:52:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Optimal | null | Optimal/Harry | 1 | null | transformers | 28,211 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
OscarNav/dialoGPT_translate | 9aad3030837254dd07f53b456cff375694cfded5 | 2021-12-03T01:30:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | OscarNav | null | OscarNav/dialoGPT_translate | 1 | null | transformers | 28,212 | # Finetuned DialoGPT model for Eng-Spa translation
The DialoGPT-small model was fine-tuned on English-to-Spanish translations extracted from http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip
Some example translations:
| Role | Response |
| :---: |------------------------|
| User | please, sing me a song |
| Bot | Por favor, canta una canción. |
| User | I really want to go to China |
| Bot | Realmente quiero ir a China. |
| User | Can you do me a favor? |
| Bot | ¿Me puedes hacer un favor? |
| User | I don't know what you are talking about |
| Bot | No sé de qué estás hablando. |
| User | I don't want to go to China |
| Bot | No quiero ir a China. |
# Using the model
Example code for trying out the model:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('OscarNav/dialoGPT_translate')

# Let's translate 5 sentences
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        new_user_input_ids, max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        top_p=0.92, top_k=50,
        do_sample=True,  # needed so that top_p/top_k sampling actually takes effect
    )

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, new_user_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
OwOmeister/DialoGPT-small-rick | 52afa22facb04f970a9f2ed157a61304cccfbbfa | 2021-09-05T07:54:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | OwOmeister | null | OwOmeister/DialoGPT-small-rick | 1 | null | transformers | 28,213 | ---
tags:
- conversational
---
# rick |
OwOmeister/killme | ce21f4e0d127c303dcdcbceda9bf751d5773dedd | 2021-09-05T08:49:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | OwOmeister | null | OwOmeister/killme | 1 | null | transformers | 28,214 | ---
tags:
- conversational
---
# ughhh |
P4RZ1V4L/DialoGPT-Medium-Tony | af2e6dbb1fd50b73aac153cd43e7a922f3943e9e | 2022-03-06T12:00:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | P4RZ1V4L | null | P4RZ1V4L/DialoGPT-Medium-Tony | 1 | null | transformers | 28,215 | ---
tags:
- conversational
---
# Tony Stark DialoGPT Model |
Palak/albert-base-v2_squad | af0741963b922a7832e09d5a6a6845a8277771b9 | 2021-12-24T18:16:45.000Z | [
"pytorch",
"albert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Palak | null | Palak/albert-base-v2_squad | 1 | null | transformers | 28,216 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-base-v2_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2_squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the **squadV1** dataset.
- "eval_exact_match": 82.69631031220435
- "eval_f1": 90.10806626207174
- "eval_samples": 10808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Palak/google_electra-small-discriminator_squad | 3061981233cf713b36de4f507e3c2594aca81844 | 2021-12-24T18:15:49.000Z | [
"pytorch",
"electra",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Palak | null | Palak/google_electra-small-discriminator_squad | 1 | null | transformers | 28,217 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: google_electra-small-discriminator_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google_electra-small-discriminator_squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the **squadV1** dataset.
- "eval_exact_match": 76.95364238410596
- "eval_f1": 84.98869246841396
- "eval_samples": 10784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
PedroR/xlm-roberta-7-pretrained | 40cefafed12a3753e25c93bd222b7eb5192aa62f | 2021-07-29T10:50:38.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | PedroR | null | PedroR/xlm-roberta-7-pretrained | 1 | null | transformers | 28,218 | Entry not found |
PereLluis13/wav2vec2-large-xlsr-53-ca | 09948ea6a5817ba50fcc7bd33b701c97f4dc570e | 2022-02-04T14:25:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | PereLluis13 | null | PereLluis13/wav2vec2-large-xlsr-53-ca | 1 | null | transformers | 28,219 | Entry not found |
PereLluis13/wav2vec2-xls-r-1b-ca-old | 0665b23bd031b5326302ea37efb801a512bcaed0 | 2022-02-03T10:40:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | PereLluis13 | null | PereLluis13/wav2vec2-xls-r-1b-ca-old | 1 | null | transformers | 28,220 | Entry not found |
PereLluis13/wav2vec2-xls-r-300m-ca | a7ec237b899e4d4a5ca950e1d39dba46af9f5057 | 2022-03-29T08:43:53.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ca",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:collectivat/tv3_parla",
"dataset:projecte-aina/parlament_parla",
"transformers",
"collectivat/tv3_parla",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"projecte-aina/parlament_parla",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | PereLluis13 | null | PereLluis13/wav2vec2-xls-r-300m-ca | 1 | 1 | transformers | 28,221 | ---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-300m-ca
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 ca
type: mozilla-foundation/common_voice_8_0
args: ca
metrics:
- name: Test WER
type: wer
value: 13.170091241317552
- name: Test CER
type: cer
value: 3.356726205534543
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: projecte-aina/parlament_parla ca
type: projecte-aina/parlament_parla
args: clean
metrics:
- name: Test WER
type: wer
value: 8.048005647723261
- name: Test CER
type: cer
value: 2.240912911020065
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: collectivat/tv3_parla ca
type: collectivat/tv3_parla
args: ca
metrics:
- name: Test WER
type: wer
value: 23.320629787889285
- name: Test CER
type: cer
value: 10.439216202089989
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: speech-recognition-community-v2/dev_data ca
type: speech-recognition-community-v2/dev_data
args: ca
metrics:
- name: Test WER
type: wer
value: 31.99671115046487
- name: Test CER
type: cer
value: 15.820020687277325
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ca
metrics:
- name: Test WER
type: wer
value: 22.04
---
# wav2vec2-xls-r-300m-ca
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
It achieves the following results on the evaluation set (for the three datasets):
- Loss: 0.2472
- Wer: 0.1499
## Model description
Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) model card. This is just a fine-tuned version of that model.
## Intended uses & limitations
As with any model trained on crowdsourced data, this model can reflect the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.
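For reference, a minimal inference sketch (not part of the original card) using the `automatic-speech-recognition` pipeline is shown below; `audio.wav` is a placeholder for any 16 kHz mono Catalan recording.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="PereLluis13/wav2vec2-xls-r-300m-ca",
)

# "audio.wav" is a placeholder path; decoding a local file requires ffmpeg.
print(asr("audio.wav")["text"])
```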
## Training and evaluation data
More information needed
## Training procedure
The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
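The snippet below is an illustrative sketch of the character-filtering step only; the exact character set is an assumption, and number verbalization is handled by the linked numbers_ca.py script, not reproduced here.
```python
import re

# Assumed Catalan character set (lowercase letters, accented vowels, ç, the
# interpunct and apostrophe); the set actually used in training is not listed.
ALLOWED = "a-zàèéíïòóúüç·' "

def clean_text(text: str) -> str:
    # Numbers would be verbalized by the linked script before this step.
    return re.sub(f"[^{ALLOWED}]", "", text.lower())

print(clean_text("Visca el Barça!"))  # -> "visca el barça"
```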
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 18.0
- mixed_precision_training: Native AMP
### Training results
Check the Tensorboard tab for the training profile and evaluation results over the course of training. The model was evaluated on the test splits for each of the datasets used during training.
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2099 | 0.09 | 500 | 3.4125 | 1.0 |
| 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 |
| 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 |
| 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 |
| 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 |
| 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 |
| 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 |
| 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 |
| 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 |
| 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 |
| 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 |
| 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 |
| 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 |
| 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 |
| 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 |
| 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 |
| 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 |
| 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 |
| 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 |
| 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 |
| 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 |
| 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 |
| 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 |
| 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 |
| 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 |
| 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 |
| 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 |
| 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 |
| 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 |
| 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 |
| 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 |
| 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 |
| 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 |
| 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 |
| 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 |
| 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 |
| 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 |
| 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 |
| 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 |
| 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 |
| 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 |
| 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 |
| 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 |
| 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 |
| 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 |
| 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 |
| 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 |
| 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 |
| 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 |
| 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 |
| 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 |
| 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 |
| 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 |
| 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 |
| 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 |
| 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 |
| 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 |
| 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 |
| 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 |
| 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 |
| 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 |
| 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
# Thanks
We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who contributed their own resources and knowledge to make this model possible.
|
Phantomhive/Noelle-bot | 2512b6083e45646bdd681336c0138b1bae06921c | 2021-06-26T16:10:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Phantomhive | null | Phantomhive/Noelle-bot | 1 | null | transformers | 28,222 | |
Phiion/DialoGPT-large-dilucbot | b4af998b2c8b52287e8be731d3302f59975ec5bf | 2022-01-17T18:29:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Phiion | null | Phiion/DialoGPT-large-dilucbot | 1 | null | transformers | 28,223 | Entry not found |
PhilipTheGreat/DiabloGPT-small-Traveller | beb611e5292b7b58f43b441eaa09cb937cae5a1e | 2021-09-19T05:19:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | PhilipTheGreat | null | PhilipTheGreat/DiabloGPT-small-Traveller | 1 | null | transformers | 28,224 | ---
tags:
- conversational
---
# Traveller DiabloGPT Model |
Poly-Pixel/shrek-medium-full | ae9e43b75aa07898c4ac0bfaf42967f25a38e11d | 2021-09-01T00:03:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Poly-Pixel | null | Poly-Pixel/shrek-medium-full | 1 | null | transformers | 28,225 | ---
tags:
- conversational
---
Shrek, with all 4 scripts! |
Poly-Pixel/shrek-test-small | ee4ba81dbc3a9b6b3bc8873aadf6e00b539f52f3 | 2021-08-28T21:45:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Poly-Pixel | null | Poly-Pixel/shrek-test-small | 1 | null | transformers | 28,226 | ---
tags:
- conversational
---
# Shrek Small DialoGPT Model |
Preeyank/roberta-base-education-domain | 22937f1539d2629a527ffdd24eb275b6b974eb78 | 2021-05-20T12:17:05.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Preeyank | null | Preeyank/roberta-base-education-domain | 1 | null | transformers | 28,227 | |
Pupihed/DialoGPT-small-shrek | d620bd42ab09526570594ce98d3ad43b04776a11 | 2021-09-02T01:22:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Pupihed | null | Pupihed/DialoGPT-small-shrek | 1 | null | transformers | 28,228 | ---
tags:
- conversational
---
# Shrek DialoGPT Model |
Pyke/DS-config-2 | af3c34876886f44686bfa2a348a8a3fe267e5604 | 2021-08-18T17:32:30.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-2 | 1 | null | transformers | 28,229 | Entry not found |
Pyke/DS-config-5 | cd1d233af02b87b9e9ece350eb1d1b3d4242063b | 2021-08-18T18:16:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/DS-config-5 | 1 | null | transformers | 28,230 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-DS-04 | e90528cca6537f2ae6f366a889bfb30a2875b253 | 2021-08-18T03:24:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-DS-04 | 1 | null | transformers | 28,231 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-DS-2 | a1c44eed97b8a9eadd02202da789a387ebce5955 | 2021-08-18T02:19:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-DS-2 | 1 | null | transformers | 28,232 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-01 | 3dd5c054d8fee34a50352e40f44d81e5a9401ca9 | 2021-08-17T13:52:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-01 | 1 | null | transformers | 28,233 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-02 | e0f62ff48e48893e12af2701427ccb6d84aff2bb | 2021-08-17T13:54:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-02 | 1 | null | transformers | 28,234 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-05 | 46d57fe62bfe77a9f65bc529d4637e2895b8855f | 2021-08-17T14:00:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-05 | 1 | null | transformers | 28,235 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-1 | 3cd98db45af31ca66fd696236a6173abde34e260 | 2021-08-16T18:02:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-1 | 1 | null | transformers | 28,236 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-2 | db3b74b01be20f5063a236d3f1d1b6a965b74001 | 2021-08-17T17:06:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-2 | 1 | null | transformers | 28,237 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-4 | 64cc4969f910be9feb57989a4d0717f1391e32f8 | 2021-08-17T15:26:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-4 | 1 | null | transformers | 28,238 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test13 | f156120305d90aceda1034aff9463f4eb2dba269 | 2021-08-15T18:19:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test13 | 1 | null | transformers | 28,239 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test14 | eff98905b11058e4964a8f2fed6b06aea4be3e50 | 2021-08-15T18:22:09.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test14 | 1 | null | transformers | 28,240 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test15 | 0e82eeaae2a42314ceaf0c9e57ad1484d3904c00 | 2021-08-15T18:23:48.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test15 | 1 | null | transformers | 28,241 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test18 | db7301d19f2c9ae920396c11836805bf6ffbe065 | 2021-08-15T18:50:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test18 | 1 | null | transformers | 28,242 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test23 | 28f3c688feae731431ebffbc60a6b66d9a258969 | 2021-08-15T19:25:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test23 | 1 | null | transformers | 28,243 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test25 | fb822dc829349342afe0549ab3fb40901fa70f30 | 2021-08-15T19:49:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test25 | 1 | null | transformers | 28,244 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test27 | 7c0c69c592f26786746efa2a4338d9bbcc2104cb | 2021-08-16T15:27:08.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test27 | 1 | null | transformers | 28,245 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test28 | 66f9114a25476d1e35cc020ee7add47c9ed6c940 | 2021-08-16T15:30:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test28 | 1 | null | transformers | 28,246 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test29 | 09fe4678f79fa9325710f15cd8ccd2a2cbe8f425 | 2021-08-16T15:37:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test29 | 1 | null | transformers | 28,247 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test3 | d665933d47b9d96bf11dd942e499c6c8f466aeae | 2021-08-13T20:07:39.000Z | [
"pytorch",
"bart",
"transformers"
] | null | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test3 | 1 | null | transformers | 28,248 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test32 | e8558fa9a9a4787e38e065272f6ee1013b384133 | 2021-08-16T15:55:45.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test32 | 1 | null | transformers | 28,249 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test34 | 90388e2bcb67ad607b382ab7e346452ff02e4cd1 | 2021-08-16T16:00:54.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test34 | 1 | null | transformers | 28,250 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test36 | eea37a03cb645a01053e5f50505af8aea6e09c8f | 2021-08-16T16:13:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test36 | 1 | null | transformers | 28,251 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test4 | 75cab30c05ac439d6228ca07b6305dafaf6f209f | 2021-08-14T09:12:58.000Z | [
"pytorch",
"bart",
"transformers"
] | null | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test4 | 1 | null | transformers | 28,252 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test7 | 4be75a3bef828e4d5089dce18cca0d65217662a5 | 2021-08-14T18:12:46.000Z | [
"pytorch",
"bart",
"transformers"
] | null | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test7 | 1 | null | transformers | 28,253 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test8 | 9f5120e5b7e6f036c8fbb32dab01a2fd552dbf45 | 2021-08-15T04:34:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test8 | 1 | null | transformers | 28,254 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test9 | 869f3caec686df9a5b683d2c553072bb714872ad | 2021-08-15T17:42:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test9 | 1 | null | transformers | 28,255 | Entry not found |
Pyke/bart-finetuned-with-patent-test | 4906a32854ed67a4548528229dc258048de4a303 | 2021-08-06T16:42:21.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Pyke | null | Pyke/bart-finetuned-with-patent-test | 1 | null | transformers | 28,256 | Entry not found |
QuickRead/Reward_training_Pegasus_xsum | d5ecdbb615f7e3c2c414dd6dcd9778c4cd83f39b | 2022-02-09T13:23:35.000Z | [
"pytorch",
"pegasus",
"feature-extraction",
"transformers"
] | feature-extraction | false | QuickRead | null | QuickRead/Reward_training_Pegasus_xsum | 1 | null | transformers | 28,257 | Entry not found |
RASMUS/wav2vec2-xlsr-1b-et-lm | 0e53ecf23b5c8e2fc0a3f976dafeb102d5f9d1ce | 2022-02-05T22:16:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-1b-et-lm | 1 | null | transformers | 28,258 | Entry not found |
RASMUS/wav2vec2-xlsr-300-lm | 68eb49ae3ba86ffad150cd769b4f1d789d76699a | 2022-01-16T11:24:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-300-lm | 1 | null | transformers | 28,259 | Entry not found |
RASMUS/wav2vec2-xlsr-300-versatile-test | 95cfd36d3a104664c742811948239a49e2ce47a2 | 2022-01-09T04:29:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-300-versatile-test | 1 | null | transformers | 28,260 | Entry not found |
RASMUS/wav2vec2-xlsr-300m-et | 45f463bd872638a45df4cf3621d708428ac89fb6 | 2022-02-04T22:03:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-300m-et | 1 | null | transformers | 28,261 | Entry not found |
RASMUS/wav2vec2-xlsr-fi-train-aug-bigLM-1B | e27e2a5b323992973e27edeb0a8074b796564442 | 2022-01-27T23:00:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"mozilla-foundation/common_voice_7_0",
"audio",
"speech",
"model-index"
] | automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-fi-train-aug-bigLM-1B | 1 | null | transformers | 28,262 | ---
language: fi
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
tags:
- generated_from_trainer
- mozilla-foundation/common_voice_7_0
- audio
- automatic-speech-recognition
- speech
model-index:
- name: XLS-R 1B Wav2Vec2 Finnish by Rasmus Toivanen
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-fi-train-aug-lm-1B
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1499
- Wer: 0.1955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6473 | 0.29 | 400 | 0.2857 | 0.3825 |
| 0.6039 | 0.58 | 800 | 0.2459 | 0.3476 |
| 0.4757 | 0.87 | 1200 | 0.2338 | 0.3274 |
| 0.4473 | 1.15 | 1600 | 0.2246 | 0.3128 |
| 0.4322 | 1.44 | 2000 | 0.1962 | 0.2805 |
| 0.3961 | 1.73 | 2400 | 0.2070 | 0.2797 |
| 0.3642 | 2.02 | 2800 | 0.1790 | 0.2473 |
| 0.3561 | 2.31 | 3200 | 0.1769 | 0.2375 |
| 0.282 | 2.6 | 3600 | 0.1672 | 0.2263 |
| 0.2978 | 2.89 | 4000 | 0.1636 | 0.2192 |
| 0.2722 | 3.17 | 4400 | 0.1637 | 0.2102 |
| 0.2924 | 3.46 | 4800 | 0.1506 | 0.2021 |
| 0.2631 | 3.75 | 5200 | 0.1499 | 0.1955 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B-lower-lr | 847116a5b693d79c94e3df4937325dcdf28caf7d | 2022-01-28T22:17:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B-lower-lr | 1 | null | transformers | 28,263 | Entry not found |
RadhikaSatam/CovBert-radhika | ff53193f40e36239316adfe3ec946f827ef74354 | 2021-05-19T11:27:50.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | RadhikaSatam | null | RadhikaSatam/CovBert-radhika | 1 | null | transformers | 28,264 | Entry not found |
RahulRaman/Kannada-LM-DeBERTa | 1145f7f7dcf75e708424584537adc24dfc4e4047 | 2022-02-04T13:02:51.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Kannada-LM-DeBERTa | 1 | null | null | 28,265 | Entry not found |
RahulRaman/Kannada-LM-RoBERTa | e7928a945ba98b218bbeecf5030e36d0bbe29281 | 2022-02-04T12:47:49.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Kannada-LM-RoBERTa | 1 | null | null | 28,266 | Entry not found |
RahulRaman/Malayalam-LM-DeBERTa | 1b6d5cbce2e52bb3977372bdcd0539e1e1548ebb | 2022-02-04T13:09:37.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Malayalam-LM-DeBERTa | 1 | null | null | 28,267 | Entry not found |
RahulRaman/Tamil-LM-DeBERTa | 34a5afb1e3de4e83c013dee853e6efcc3ae12c2b | 2022-02-04T13:04:58.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Tamil-LM-DeBERTa | 1 | null | null | 28,268 | Entry not found |
RahulRaman/Tamil-LM-Electra | 432ccb0ff4f1e8c07190c9516556be6c8407aee2 | 2022-01-25T13:20:22.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Tamil-LM-Electra | 1 | null | null | 28,269 | Entry not found |
RahulRaman/Tamil-LM-RoBERTa | a81772a29e44370a38dbc3d8a46b14f8215bec7c | 2022-02-04T12:54:55.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Tamil-LM-RoBERTa | 1 | null | null | 28,270 | Entry not found |
RahulRaman/Telugu-LM-DeBERTa | e22d4ec4b0fecbfc4eed8e4dc20524aa465205cc | 2022-02-04T13:06:49.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Telugu-LM-DeBERTa | 1 | null | null | 28,271 | Entry not found |
RahulRaman/Telugu-LM-Electra | 852e4dd91bf835fcb75fc8f82f83a20c61d6c08b | 2022-02-02T10:01:47.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Telugu-LM-Electra | 1 | null | null | 28,272 | Entry not found |
RahulRaman/Telugu-LM-RoBERTa | f46c7a50c76e44f20bad091066c4585d263e89c2 | 2022-02-04T12:56:35.000Z | [
"pytorch"
] | null | false | RahulRaman | null | RahulRaman/Telugu-LM-RoBERTa | 1 | null | null | 28,273 | Entry not found |
RahuramThiagarajan/rass | 0362a9b3fa43511e5cd63ccdbe8f1f22baa9983b | 2021-11-11T02:46:12.000Z | [
"pytorch"
] | null | false | RahuramThiagarajan | null | RahuramThiagarajan/rass | 1 | null | null | 28,274 | Entry not found |
Rashid11/DialoGPT-small-rick | 024ec69d612950a987bbd22d8063e4198923cdd6 | 2021-09-18T10:28:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Rashid11 | null | Rashid11/DialoGPT-small-rick | 1 | null | transformers | 28,275 | ---
tags:
- conversational
---
# Rick Morty DialoGPT Model |
Rathod/DialoGPT-small-harrypotter | 0da1e45a2cbf5b205d7061c3ff7283609b751644 | 2021-09-28T08:00:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Rathod | null | Rathod/DialoGPT-small-harrypotter | 1 | null | transformers | 28,276 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Rattana/wav2vec2-thai-colab | 593327431459b1d91987e3cd3057500e4b2acdda | 2022-02-22T08:32:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Rattana | null | Rattana/wav2vec2-thai-colab | 1 | null | transformers | 28,277 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-thai-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Recognai/veganuary_ner | c2dd32d9a9be30c9c7c608029f212b0f4cce641e | 2022-02-07T13:29:52.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Recognai | null | Recognai/veganuary_ner | 1 | null | transformers | 28,278 | Entry not found |
RishabhRawatt/DialoGPT-small-kela | b2d32639120173d97e645e301f4aa65b6a725cb6 | 2021-09-05T16:34:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | RishabhRawatt | null | RishabhRawatt/DialoGPT-small-kela | 1 | null | transformers | 28,279 | ---
tags:
- conversational
---
# Kela DialoGPT Model |
RizqFarIDN/DialoGPT-small-harrypotter | 8f35eacaedec1d8f2c4657de054486417fc49b03 | 2021-11-25T02:58:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | RizqFarIDN | null | RizqFarIDN/DialoGPT-small-harrypotter | 1 | null | transformers | 28,280 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
RollingMuffin/scripts_ru | 33a1aed22c1949332f204a13e2c276924d7354ab | 2022-02-23T16:18:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | RollingMuffin | null | RollingMuffin/scripts_ru | 1 | null | transformers | 28,281 | Entry not found |
S34NtheGuy/DialoGPT-medium-Glass_Of_Water | 8455d6e8083b6776feb898f370041c91951eddcc | 2021-10-14T12:28:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | S34NtheGuy | null | S34NtheGuy/DialoGPT-medium-Glass_Of_Water | 1 | null | transformers | 28,282 | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
S34NtheGuy/DialoGPT-medium-Mona | ced64c621a7f450299cb532d334a9295062a77ad | 2021-12-14T18:49:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | S34NtheGuy | null | S34NtheGuy/DialoGPT-medium-Mona | 1 | null | transformers | 28,283 | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
S34NtheGuy/DialoGPT-small-Harry282 | 8f7256ef286bafb40a93c8e83cc1ca9d0eefe9d4 | 2021-10-12T17:21:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | S34NtheGuy | null | S34NtheGuy/DialoGPT-small-Harry282 | 1 | null | transformers | 28,284 | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
S34NtheGuy/DialoGPT-small-pikamew362 | cdc44847da3979914250c30083341a8f9cfcc23d | 2021-10-14T02:01:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | S34NtheGuy | null | S34NtheGuy/DialoGPT-small-pikamew362 | 1 | null | transformers | 28,285 | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
SEBIS/legal_t5_small_cls_finetuned_cs | d4b756259e074487376a05e39a8096ea6c9231be | 2021-06-23T10:29:51.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_finetuned_cs | 1 | null | transformers | 28,286 | Entry not found |
SEBIS/legal_t5_small_cls_finetuned_en | 4ac3cb8c38efb63c92d6004f2c5c6f00ea5ab99a | 2021-06-23T10:31:51.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_finetuned_en | 1 | null | transformers | 28,287 | Entry not found |
SEBIS/legal_t5_small_cls_finetuned_es | 30ef1b1fcd3bd903420e6c38dd0a2bc68c4014d8 | 2021-06-23T10:32:45.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_finetuned_es | 1 | null | transformers | 28,288 | Entry not found |
SEBIS/legal_t5_small_cls_finetuned_it | 9c2d67e27d5a442445f39dda3f5cac054a0539f4 | 2021-06-23T10:34:35.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_finetuned_it | 1 | null | transformers | 28,289 | Entry not found |
SEBIS/legal_t5_small_cls_it | 05fef21cf0332b0319d067df96bd7531a663fbb2 | 2021-06-23T10:36:37.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian",
"dataset:jrc-acquis",
"transformers",
"classification Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_it | 1 | null | transformers | 28,290 |
---
language: Italian
tags:
- classification Italian model
datasets:
- jrc-acquis
widget:
- text: "Regolamento (CE) n. 435/2005 della Commissione del 17 marzo 2005 relativo all'applicazione di un coefficiente di riduzione ai certificati di restituzione per le merci non comprese nell'allegato I del trattato come statuito all'articolo 8, paragrafo 5, del regolamento (CE) n. 1520/2000 LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CE) n. 3448/93 del Consiglio, del 6 dicembre 1993, sul regime di scambi per talune merci ottenute dalla trasformazione di prodotti agricoli [1], visto il regolamento (CE) n. 1520/2000 della Commissione, del 13 luglio 2000, che stabilisce, per taluni prodotti agricoli esportati sotto forma di merci non comprese nell'allegato I del trattato, le modalità comuni di applicazione relative al versamento delle restituzioni all'esportazione e i criteri per stabilirne l'importo [2], in particolare l'articolo 8, paragrafo 5, considerando quanto segue: (1) Dalle comunicazioni degli Stati membri di cui all'articolo 8, paragrafo 2, del regolamento (CE) n. 1520/2000 si evince che l'importo totale delle domande ricevute ammonta a 178002906 EUR, mentre l'importo disponibile per la tranche di titoli di restituzione di cui all'articolo 8, paragrafo 4, del regolamento (CE) n. 1520/2000 ammonta a 68116869 EUR. (2) Un coefficiente di riduzione è calcolato sulla base dell'articolo 8, paragrafi 3 e 4, del regolamento (CE) n. 1520/2000. Siffatto coefficiente dovrebbe pertanto essere applicato agli importi richiesti sotto forma di certificati di restituzione per il periodo dal 1o aprile 2005 come stabilito all'articolo 8, paragrafo 6, del regolamento (CE) n. 1520/2000, HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 Gli importi delle domande di certificati di restituzione per il periodo dal 1o aprile 2005 sono soggetti a un coefficiente di riduzione pari a 0,618. Articolo 2 Il presente regolamento entra in vigore il 18 marzo 2005. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 17 marzo 2005. Per la Commissione Günter Verheugen Vicepresidente [1] GU L 318 del 20.12.1993, pag. 18. Regolamento modificato da ultimo dal regolamento (CE) n. 2580/2000 (GU L 298 del 25.11.2000, pag. 5). [2] GU L 177 del 15.7.2000, pag. 1. Regolamento modificato da ultimo dal regolamento (CE) n. 886/2004 (GU L 168 del 1.5.2004, pag. 14). --------------------------------------------------"
---
# legal_t5_small_cls_it model
Model for classification of legal text written in Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_cls_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for classification of legal texts written in Italian.
### How to use
Here is how to use this model to classify legal text written in Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_it"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_cls_it",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)
it_text = "Regolamento (CE) n. 435/2005 della Commissione del 17 marzo 2005 relativo all'applicazione di un coefficiente di riduzione ai certificati di restituzione per le merci non comprese nell'allegato I del trattato come statuito all'articolo 8, paragrafo 5, del regolamento (CE) n. 1520/2000 LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CE) n. 3448/93 del Consiglio, del 6 dicembre 1993, sul regime di scambi per talune merci ottenute dalla trasformazione di prodotti agricoli [1], visto il regolamento (CE) n. 1520/2000 della Commissione, del 13 luglio 2000, che stabilisce, per taluni prodotti agricoli esportati sotto forma di merci non comprese nell'allegato I del trattato, le modalità comuni di applicazione relative al versamento delle restituzioni all'esportazione e i criteri per stabilirne l'importo [2], in particolare l'articolo 8, paragrafo 5, considerando quanto segue: (1) Dalle comunicazioni degli Stati membri di cui all'articolo 8, paragrafo 2, del regolamento (CE) n. 1520/2000 si evince che l'importo totale delle domande ricevute ammonta a 178002906 EUR, mentre l'importo disponibile per la tranche di titoli di restituzione di cui all'articolo 8, paragrafo 4, del regolamento (CE) n. 1520/2000 ammonta a 68116869 EUR. (2) Un coefficiente di riduzione è calcolato sulla base dell'articolo 8, paragrafi 3 e 4, del regolamento (CE) n. 1520/2000. Siffatto coefficiente dovrebbe pertanto essere applicato agli importi richiesti sotto forma di certificati di restituzione per il periodo dal 1o aprile 2005 come stabilito all'articolo 8, paragrafo 6, del regolamento (CE) n. 1520/2000, HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 Gli importi delle domande di certificati di restituzione per il periodo dal 1o aprile 2005 sono soggetti a un coefficiente di riduzione pari a 0,618. Articolo 2 Il presente regolamento entra in vigore il 18 marzo 2005. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 17 marzo 2005. Per la Commissione Günter Verheugen Vicepresidente [1] GU L 318 del 20.12.1993, pag. 18. Regolamento modificato da ultimo dal regolamento (CE) n. 2580/2000 (GU L 298 del 25.11.2000, pag. 5). [2] GU L 177 del 15.7.2000, pag. 1. Regolamento modificato da ultimo dal regolamento (CE) n. 886/2004 (GU L 168 del 1.5.2004, pag. 14). --------------------------------------------------"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_cls_it model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
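As an illustration of the AdaFactor setup with an inverse-square-root (relative-step) schedule mentioned above, a rough sketch using the `transformers` optimization helpers is shown below; it uses `t5-small` as a stand-in and is not the exact training script.
```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# relative_step=True gives the inverse-square-root learning-rate behaviour.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)
```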
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.
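As an illustration of the vocabulary step described above, a SentencePiece unigram model can be trained roughly as follows; the input file name and vocabulary size are placeholders, not values taken from the card.
```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",      # placeholder for the 88M lines of parallel text
    model_prefix="legal_t5_unigram",
    model_type="unigram",
    vocab_size=32000,                 # assumed; the actual size is not stated
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_unigram.model")
print(sp.encode("Il presente regolamento entra in vigore", out_type=str))
```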
### Pretraining
## Evaluation results
When the model is used on the classification test dataset, it achieves the following results:
Test results:
| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_it | 0.6296|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_cls_multitask_fr | be583ff3d06b03a7512bcc8699e17cd0a5049f55 | 2021-06-23T10:43:14.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_cls_multitask_fr | 1 | null | transformers | 28,291 | Entry not found |
SEBIS/legal_t5_small_finetuned_summ_cs | 14f987012ab08369e3133b2a18df25da95d18589 | 2021-06-23T10:46:26.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_finetuned_summ_cs | 1 | null | transformers | 28,292 | Entry not found |
SEBIS/legal_t5_small_finetuned_summ_en | 986b807c41dc291d0ffa7ed420e3a53ffb83d521 | 2021-06-23T10:47:37.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_finetuned_summ_en | 1 | null | transformers | 28,293 | Entry not found |
SEBIS/legal_t5_small_finetuned_summ_fr | 4f05b33f7470565ccd8f89a55ca4c0f41a86717a | 2021-06-23T10:48:49.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_finetuned_summ_fr | 1 | null | transformers | 28,294 | Entry not found |
SEBIS/legal_t5_small_finetuned_summ_it | dcd4eac9dc4374f5a594cac27748688c10c1e0e7 | 2021-06-23T10:49:23.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_finetuned_summ_it | 1 | null | transformers | 28,295 | Entry not found |
SEBIS/legal_t5_small_multitask_cs_it | be6bc2affe928ef3d05b6730c187edd35a924cc1 | 2021-06-23T10:53:09.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Italian model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_cs_it | 1 | null | transformers | 28,296 |
---
language: Cszech Italian
tags:
- translation Cszech Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Příprava Evropské rady (29.-30. října 2009)"
---
# legal_t5_small_multitask_cs_it model
Model for translating legal text from Czech to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora covering 42 language pairs
from jrc-acquis, europarl and dcep, along with an unsupervised masked-language-modelling task.
## Model description
No pretraining is involved in the case of the legal_t5_small_multitask_cs_it model; rather, the unsupervised task is added alongside all the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from Czech to Italian.
### How to use
Here is how to use this model to translate legal text from Czech to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Příprava Evropské rady (29.-30. října 2009)"
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_it model (covering the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where data from all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_it | 45.297|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_cs_sv | 6836d9c5e09a1b2daf6857fcd18bdddeb2727081 | 2021-06-23T10:53:46.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Cszech Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Cszech Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_cs_sv | 1 | null | transformers | 28,297 |
---
language: Cszech Swedish
tags:
- translation Cszech Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Hračky určené pro častý kontakt s kůží obsahující alergenní látky jiné než vonné, které jsou známé vyvoláváním vážných nebo dokonce osudných účinků na zdraví dětí (například látky, které mohou vyvolat anafylaktický šok), musí být v souladu s ustanoveními týkajícími se označování uvedenými ve směrnici Komise 2006/125/ES ze dne 5. prosince 2006 o obilných a ostatních příkrmech pro kojence a malé děti."
---
# legal_t5_small_multitask_cs_sv model
Model for translating legal text from Czech to Swedish, first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised task in which the model performs masked language model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_cs_sv model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from Czech to Swedish.
### How to use
Here is how to use this model to translate legal text from Czech to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
cs_text = "Hračky určené pro častý kontakt s kůží obsahující alergenní látky jiné než vonné, které jsou známé vyvoláváním vážných nebo dokonce osudných účinků na zdraví dětí (například látky, které mohou vyvolat anafylaktický šok), musí být v souladu s ustanoveními týkajícími se označování uvedenými ve směrnici Komise 2006/125/ES ze dne 5. prosince 2006 o obilných a ostatních příkrmech pro kojence a malé děti."
pipeline([cs_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_cs_sv model (covering both the supervised task, which involved only the corresponding language pair, and the unsupervised task, for which data from all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte pair encoding) used by this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results (a sketch of how such a score can be computed follows the table):
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_cs_sv | 35.871|
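The evaluation script itself is not included in this card; the snippet below is a minimal sketch of how a corpus-level BLEU score like the one above could be computed with sacreBLEU. The file names are hypothetical, and the default sacreBLEU settings are an assumption rather than the authors' exact configuration.
```python
from sacrebleu.metrics import BLEU

# Hypothetical files: one model translation and one reference translation per line.
with open("cs_sv_hypotheses.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("cs_sv_references.txt", encoding="utf-8") as f:
    references = [[line.strip() for line in f]]  # sacreBLEU expects a list of reference streams

bleu = BLEU()
print(bleu.corpus_score(hypotheses, references).score)
```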
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_de_fr | e55bc95cec341a509e72a68b912a8fee29767850 | 2021-06-23T10:55:38.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch French",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch French model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_de_fr | 1 | null | transformers | 28,298 |
---
language: Deustch French
tags:
- translation Deustch French model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Wegen einer in Ausübung ihres Amtes erfolgten Äußerung oder Abstimmung dürfen Mitglieder des Europäischen Parlaments weder in ein Ermittlungsverfahren verwickelt noch festgenommen oder verfolgt werden."
---
# legal_t5_small_multitask_de_fr model
Model for translating legal text from German to French, first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised task in which the model performs masked language model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_de_fr model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from German to French.
### How to use
Here is how to use this model to translate legal text from German to French in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_fr"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_fr", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Wegen einer in Ausübung ihres Amtes erfolgten Äußerung oder Abstimmung dürfen Mitglieder des Europäischen Parlaments weder in ein Ermittlungsverfahren verwickelt noch festgenommen oder verfolgt werden."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_de_fr model (covering both the supervised task, which involved only the corresponding language pair, and the unsupervised task, for which data from all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
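The original training code is not reproduced in this card; the snippet below is a minimal PyTorch sketch of the optimizer setup described above, using the Adafactor implementation in transformers, whose relative-step mode applies a built-in inverse square root schedule. The toy batch, the example sentences and the single-step loop are simplifications (and assume a recent transformers version), not the authors' training script.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers.optimization import Adafactor

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_de_fr")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_multitask_de_fr")

# With relative_step=True, Adafactor uses its built-in inverse square root
# learning rate schedule, matching the schedule described above.
optimizer = Adafactor(model.parameters(), scale_parameter=True,
                      relative_step=True, warmup_init=True, lr=None)

# One illustrative update on a toy batch (the actual run used a TPU Pod and batch size 4096).
batch = tokenizer(
    ["Wegen einer in Ausübung ihres Amtes erfolgten Äußerung oder Abstimmung ..."],
    text_target=["En raison d'une opinion ou d'un vote émis dans l'exercice de leurs fonctions ..."],
    return_tensors="pt", padding=True, truncation=True, max_length=512,
)
loss = model(**batch).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```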
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte pair encoding) used by this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_de_fr | 41.003|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_multitask_en_de | bdea191a5fb1514f6a9d92f3b2805a3d9224a9a6 | 2021-06-23T10:58:16.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Deustch",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_en_de | 1 | null | transformers | 28,299 |
---
language: English Deustch
tags:
- translation English Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Reiterates its call on the Commission to submit a proposal to the Parliament and Council as soon as possible in order to ensure that bunker oil for engine fuel in new ships is stored in safer, double-hull tanks since freight or container ships often contain heavy fuel as engine fuel in their bunkers the quantity of which may considerably exceed the cargoes of smaller oil tankers; considers that, before submitting such a proposal, the Commission should ascertain whether or not the existing IMO rules laid down in Resolution MEPC.141(54) are sufficient to guarantee the safe transport of bunker oil used as fuel;"
---
# legal_t5_small_multitask_en_de model
Model for translating legal text from English to German, first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised task in which the model performs masked language model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_en_de model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model can be used for translation of legal texts from English to German.
### How to use
Here is how to use this model to translate legal text from English to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Reiterates its call on the Commission to submit a proposal to the Parliament and Council as soon as possible in order to ensure that bunker oil for engine fuel in new ships is stored in safer, double-hull tanks since freight or container ships often contain heavy fuel as engine fuel in their bunkers the quantity of which may considerably exceed the cargoes of smaller oil tankers; considers that, before submitting such a proposal, the Commission should ascertain whether or not the existing IMO rules laid down in Resolution MEPC.141(54) are sufficient to guarantee the safe transport of bunker oil used as fuel;"
pipeline([en_text], max_length=512)
```
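For finer-grained control than the pipeline offers, the model can also be called directly through the tokenizer and `generate`. The snippet below is an illustrative sketch only: the input sentence and the beam search settings are assumptions, not values recommended by the authors.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_multitask_en_de")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_multitask_en_de")

en_text = "The committee adopted the resolution by a large majority."
inputs = tokenizer(en_text, return_tensors="pt", truncation=True, max_length=512)

# Beam search parameters here are illustrative, not the authors' recommendation.
output_ids = model.generate(**inputs, max_length=512, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```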
## Training data
The legal_t5_small_multitask_en_de model (covering both the supervised task, which involved only the corresponding language pair, and the unsupervised task, for which data from all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte pair encoding) used by this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_de | 41.337|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|