modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Gantenbein/ADDI-IT-XLM-R | 8d4f6d0740f0e84d0eae8eb2db9827b1a1964f86 | 2021-06-01T14:24:52.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Gantenbein | null | Gantenbein/ADDI-IT-XLM-R | 1 | null | transformers | 28,000 | Entry not found |
Gayathri/distilbert-base-uncased-finetuned-squad | 122ce3e48ce5930049916267a7869fea3085e0b1 | 2021-10-19T20:36:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Gayathri | null | Gayathri/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 28,001 | Entry not found |
Geotrend/bert-base-en-es-zh-cased | dd304cb33d875efd464a610ff175c4e47ef1166b | 2021-05-18T19:13:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-es-zh-cased | 1 | null | transformers | 28,002 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-es-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-zh-cased")
```
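Continuing from the snippet above, the short sketch below shows one way to pull contextual representations out of the loaded model; the example sentence is arbitrary and `torch` is assumed to be installed.
```python
import torch

# Encode an arbitrary sentence and run the encoder.
inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per (sub)word token; per the card, these match the
# representations of the original multilingual model for the covered languages.
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```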
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-uk-cased | dff56ad31b0f69e898476326eaf204c28d434629 | 2021-05-18T19:49:13.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-uk-cased | 1 | null | transformers | 28,003 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-uk-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-uk-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-uk-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-es-zh-cased | 4e9dd372190a093a13a8f1fe1452f120dcff2f5a | 2021-07-29T11:41:00.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-es-zh-cased | 1 | null | transformers | 28,004 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-es-zh-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-es-zh-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-es-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-da-ja-vi-cased | f911d3100b79e11a98f297968457f136aa2ab779 | 2021-07-27T16:28:47.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-da-ja-vi-cased | 1 | null | transformers | 28,005 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-da-ja-vi-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-da-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-da-ja-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-de-no-da-cased | 966ca694f8e3d5696a2840e070324403b1458fe9 | 2021-07-28T07:52:56.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-de-no-da-cased | 1 | null | transformers | 28,006 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-de-no-da-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-de-no-da-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-de-no-da-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-zh-ja-vi-cased | 070841e12448f54da5f91900a40a39fc2ba48a90 | 2021-07-27T15:01:32.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-zh-ja-vi-cased | 1 | null | transformers | 28,007 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-zh-ja-vi-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-zh-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-zh-ja-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-hi-cased | 4a1ce1871cc70d80f67ab45da5bc686865e272d6 | 2021-08-16T13:57:40.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-hi-cased | 1 | null | transformers | 28,008 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-hi-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-hi-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-hi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-lt-cased | 766965f3802cd987e004b8428ac705828d49d2f8 | 2021-07-27T18:28:58.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-lt-cased | 1 | null | transformers | 28,009 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-lt-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-lt-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-lt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-no-cased | 8e11d417dd5d88f0d7d1bbd074f206cb5daca94e | 2021-07-27T09:27:31.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-no-cased | 1 | null | transformers | 28,010 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-no-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-no-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-no-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-th-cased | 2662b3afa2aeef4c8568a4da706758833444528a | 2021-08-16T13:47:56.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-th-cased | 1 | null | transformers | 28,011 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-th-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-th-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-th-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-tr-cased | 96654f2964f1c9ff9ab2debf5fb2e5dd9192c601 | 2021-08-16T14:05:00.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-tr-cased | 1 | null | transformers | 28,012 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-tr-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-tr-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-tr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-ur-cased | 45e5aa7aa90f8bf6cb47a20e1617358d2ddc0ec9 | 2021-08-16T14:03:37.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-ur-cased | 1 | null | transformers | 28,013 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-ur-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-ur-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-ur-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
GleamEyeBeast/Mandarin | f5d41759ff14f770ea5db2c7244140144b588162 | 2022-02-07T04:25:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/Mandarin | 1 | null | transformers | 28,014 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Mandarin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mandarin
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
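For readers who want to reproduce this configuration, the sketch below shows roughly how the values above map onto the `transformers` `TrainingArguments` API. It is an illustrative reconstruction from the list, not the actual training script; the output directory name is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-mandarin",   # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=1,
    # The default optimizer (AdamW with betas=(0.9, 0.999), epsilon=1e-8)
    # matches the optimizer settings listed above.
)
```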
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
GleamEyeBeast/Mandarin_char | 923e318a188d4b7de42bb331ca379f1814e5a2bc | 2022-02-16T07:07:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/Mandarin_char | 1 | null | transformers | 28,015 | Entry not found |
GleamEyeBeast/Mandarin_naive | 8ae059209b6f8fa168d5913a5cd9835c9f05c002 | 2022-02-15T13:44:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/Mandarin_naive | 1 | null | transformers | 28,016 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Mandarin_naive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mandarin_naive
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4584
- Wer: 0.3999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8963 | 3.67 | 400 | 1.0645 | 0.8783 |
| 0.5506 | 7.34 | 800 | 0.5032 | 0.5389 |
| 0.2111 | 11.01 | 1200 | 0.4765 | 0.4712 |
| 0.1336 | 14.68 | 1600 | 0.4815 | 0.4511 |
| 0.0974 | 18.35 | 2000 | 0.4956 | 0.4370 |
| 0.0748 | 22.02 | 2400 | 0.4881 | 0.4235 |
| 0.0584 | 25.69 | 2800 | 0.4732 | 0.4193 |
| 0.0458 | 29.36 | 3200 | 0.4584 | 0.3999 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
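A minimal transcription sketch using the standard Wav2Vec2 CTC API is shown below. It assumes the repository ships a matching processor/vocabulary, that `librosa` is installed, and that the input clip is 16 kHz mono; the audio path is a placeholder.
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "GleamEyeBeast/Mandarin_naive"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a 16 kHz mono clip (placeholder path).
speech, _ = librosa.load("example.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```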
|
GleamEyeBeast/test | 476bf111d45c570fb964875f09530ea0c617b5c5 | 2022-01-26T04:38:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/test | 1 | null | transformers | 28,017 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1761
- Wer: 0.2161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.5828 | 4.0 | 500 | 3.0263 | 1.0 |
| 1.8657 | 8.0 | 1000 | 0.2213 | 0.2650 |
| 0.332 | 12.0 | 1500 | 0.2095 | 0.2413 |
| 0.2037 | 16.0 | 2000 | 0.1906 | 0.2222 |
| 0.1282 | 20.0 | 2500 | 0.1761 | 0.2161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
GnomeX/mt5-small-finetuned-amazon-en-es | a23a4224234abe7df78017ec08362dddaba98738 | 2021-12-07T02:44:47.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | GnomeX | null | GnomeX/mt5-small-finetuned-amazon-en-es | 1 | null | transformers | 28,018 | Entry not found |
Greysan/DialoGPT-medium-TOH | 3b696c69b2d04c3fcd9485f470450053814831ea | 2021-08-27T08:53:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Greysan | null | Greysan/DialoGPT-medium-TOH | 1 | null | transformers | 28,019 | ---
tags:
- conversational
---
# The Owl House DialoGPT Model |
GrumpyFinch/DialoGPT-large-rick2 | 8ead8499550c05865c82461ace7bed965f899750 | 2021-09-11T07:05:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | GrumpyFinch | null | GrumpyFinch/DialoGPT-large-rick2 | 1 | null | transformers | 28,020 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
Guard-SK/DialoGPT-medium-ricksanchez | 8a2ff4672e7e5721ce715e5d5ffddde07331cd83 | 2021-09-01T11:09:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Guard-SK | null | Guard-SK/DialoGPT-medium-ricksanchez | 1 | null | transformers | 28,021 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
Guard-SK/DialoGPT-small-ricksanchez | 2637d0dae9e8eeb945193d00366366dadb830e5c | 2021-08-31T21:10:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Guard-SK | null | Guard-SK/DialoGPT-small-ricksanchez | 1 | null | transformers | 28,022 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
GusNicho/roberta-base-finetuned | 17390ee07d00fc763d1a280432eaaadf2d4df1a6 | 2022-01-12T08:31:17.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | GusNicho | null | GusNicho/roberta-base-finetuned | 1 | null | transformers | 28,023 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4057
- eval_runtime: 3.7087
- eval_samples_per_second: 167.712
- eval_steps_per_second: 2.696
- epoch: 2.11
- step: 2053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
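Since the model keeps its fill-mask head, it can be queried with the standard `pipeline` API. The sketch below is a generic example; the input sentence is arbitrary.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="GusNicho/roberta-base-finetuned")

# RoBERTa checkpoints use <mask> as the mask token.
for prediction in fill_mask("The report was reviewed by the <mask> team."):
    print(f"{prediction['token_str']:>15}  {prediction['score']:.3f}")
```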
|
Hadron/DialoGPT-medium-nino | ed521ace99688b86aba15a7693dde959bcd38534 | 2021-06-04T20:30:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Hadron | null | Hadron/DialoGPT-medium-nino | 1 | null | transformers | 28,024 | ---
tags:
- conversational
---
# My Awesome Model |
Hamas/DialoGPT-large-jake3 | 107aad35b7415748baaf3eefb69be8c8fddbfa11 | 2021-09-30T19:12:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Hamas | null | Hamas/DialoGPT-large-jake3 | 1 | null | transformers | 28,025 | ---
tags:
- conversational
---
# Jake DialoGPT-large-jake
|
HarryPuttar/HarryPotterDC | 66c824ccf06515d948d5402961e6a1128885a9b8 | 2022-01-28T16:39:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | HarryPuttar | null | HarryPuttar/HarryPotterDC | 1 | null | transformers | 28,026 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Harshal6927/Jack_Sparrow_GPT | e1b8d8a3cdb4a09375a8ddac5880e5bfed35bee2 | 2021-08-29T15:30:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Harshal6927 | null | Harshal6927/Jack_Sparrow_GPT | 1 | null | transformers | 28,027 | ---
tags:
- conversational
---
# Jack Sparrow GPT |
Harshal6927/Tony_Stark_GPT | adaee44120d9520e143b52e8ef642cc7b8998b67 | 2021-08-29T07:39:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Harshal6927 | null | Harshal6927/Tony_Stark_GPT | 1 | null | transformers | 28,028 | ---
tags:
- conversational
---
# Tony Stark GPT
My first AI model. It is still learning and was trained on a small dataset, so don't expect much. |
Harveenchadha/vakyansh_hindi_base_pretrained | 76b4384e54e25f9f5b2629574fb536f68e8ff891 | 2022-03-23T18:33:38.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"hi",
"arxiv:2107.07402",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"pretrained",
"robust-speech-event",
"speech",
"license:apache-2.0"
] | null | false | Harveenchadha | null | Harveenchadha/vakyansh_hindi_base_pretrained | 1 | 1 | transformers | 28,029 | ---
language: hi
tags:
- hf-asr-leaderboard
- hi
- model_for_talk
- pretrained
- robust-speech-event
- speech
license: apache-2.0
---
Hindi Pretrained model on 4200 hours. [Link](https://arxiv.org/abs/2107.07402) |
HashireSoriYo/Lelouch_Chatbot | e49a200eced6849efcae2c932ab942bdf5b09866 | 2021-09-28T15:23:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | HashireSoriYo | null | HashireSoriYo/Lelouch_Chatbot | 1 | null | transformers | 28,030 | ---
tags:
- conversational
---
# All hail Lelouch |
Helsinki-NLP/opus-mt-el-eo | 9d88bc79fce33f9ac7dad25b9859d042705c9ef2 | 2021-01-18T08:03:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"el",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-el-eo | 1 | null | transformers | 28,031 | ---
language:
- el
- eo
tags:
- translation
license: apache-2.0
---
### ell-epo
* source group: Modern Greek (1453-)
* target group: Esperanto
* OPUS readme: [ell-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md)
* model: transformer-align
* source language(s): ell
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.epo | 32.4 | 0.517 |
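## Usage
The exported checkpoint can be loaded through the Marian integration in `transformers`. The snippet below is a minimal sketch; the Greek example sentence is arbitrary.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-el-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an arbitrary Modern Greek sentence into Esperanto.
batch = tokenizer(["Καλημέρα, τι κάνεις;"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```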
### System Info:
- hf_name: ell-epo
- source_languages: ell
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'eo']
- src_constituents: {'ell'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt
- src_alpha3: ell
- tgt_alpha3: epo
- short_pair: el-eo
- chrF2_score: 0.517
- bleu: 32.4
- brevity_penalty: 0.9790000000000001
- ref_len: 3807.0
- src_name: Modern Greek (1453-)
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: el
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ell-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-bnt | 6cf36e027b779fff16066d57c0cee1d4cba88d1a | 2021-01-18T08:05:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sn",
"zu",
"rw",
"lg",
"ts",
"ln",
"ny",
"xh",
"rn",
"bnt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-bnt | 1 | null | transformers | 28,032 | ---
language:
- en
- sn
- zu
- rw
- lg
- ts
- ln
- ny
- xh
- rn
- bnt
tags:
- translation
license: apache-2.0
---
### eng-bnt
* source group: English
* target group: Bantu languages
* OPUS readme: [eng-bnt](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md)
* model: transformer
* source language(s): eng
* target language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kin.eng.kin | 12.5 | 0.519 |
| Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.277 |
| Tatoeba-test.eng-lug.eng.lug | 4.8 | 0.415 |
| Tatoeba-test.eng.multi | 12.1 | 0.449 |
| Tatoeba-test.eng-nya.eng.nya | 22.1 | 0.616 |
| Tatoeba-test.eng-run.eng.run | 13.2 | 0.492 |
| Tatoeba-test.eng-sna.eng.sna | 32.1 | 0.669 |
| Tatoeba-test.eng-swa.eng.swa | 1.7 | 0.180 |
| Tatoeba-test.eng-toi.eng.toi | 10.7 | 0.266 |
| Tatoeba-test.eng-tso.eng.tso | 26.9 | 0.631 |
| Tatoeba-test.eng-umb.eng.umb | 5.2 | 0.295 |
| Tatoeba-test.eng-xho.eng.xho | 22.6 | 0.615 |
| Tatoeba-test.eng-zul.eng.zul | 41.1 | 0.769 |
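## Usage
Because this model has multiple target languages, the sentence-initial `>>id<<` token mentioned above selects the output language. The snippet below is an illustrative sketch; the English sentence is arbitrary and the target IDs (`sna`, `zul`) are taken from the list above.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-bnt"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend >>id<< to choose the target language (here Shona and Zulu).
sources = [">>sna<< How are you today?", ">>zul<< How are you today?"]
batch = tokenizer(sources, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```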
### System Info:
- hf_name: eng-bnt
- source_languages: eng
- target_languages: bnt
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt']
- src_constituents: {'eng'}
- tgt_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: bnt
- short_pair: en-bnt
- chrF2_score: 0.449
- bleu: 12.1
- brevity_penalty: 1.0
- ref_len: 9989.0
- src_name: English
- tgt_name: Bantu languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: bnt
- prefer_old: False
- long_pair: eng-bnt
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-sal | fdfe44baddc675f371c596891535f76b118af7d8 | 2021-01-18T08:15:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sal",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sal | 1 | null | transformers | 28,033 | ---
language:
- en
- sal
tags:
- translation
license: apache-2.0
---
### eng-sal
* source group: English
* target group: Salishan languages
* OPUS readme: [eng-sal](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sal/README.md)
* model: transformer
* source language(s): eng
* target language(s): shs_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.zip)
* test set translations: [opus-2020-07-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.test.txt)
* test set scores: [opus-2020-07-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.multi | 32.6 | 0.585 |
| Tatoeba-test.eng.shs | 1.1 | 0.072 |
| Tatoeba-test.eng-shs.eng.shs | 1.2 | 0.065 |
### System Info:
- hf_name: eng-sal
- source_languages: eng
- target_languages: sal
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sal/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sal']
- src_constituents: {'eng'}
- tgt_constituents: {'shs_Latn'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.test.txt
- src_alpha3: eng
- tgt_alpha3: sal
- short_pair: en-sal
- chrF2_score: 0.07200000000000001
- bleu: 1.1
- brevity_penalty: 1.0
- ref_len: 199.0
- src_name: English
- tgt_name: Salishan languages
- train_date: 2020-07-14
- src_alpha2: en
- tgt_alpha2: sal
- prefer_old: False
- long_pair: eng-sal
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-eu-de | d08a98a1081a793069898243eb4dbd5bce86fecc | 2021-01-18T08:31:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eu",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eu-de | 1 | 1 | transformers | 28,034 | ---
language:
- eu
- de
tags:
- translation
license: apache-2.0
---
### eus-deu
* source group: Basque
* target group: German
* OPUS readme: [eus-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eus-deu/README.md)
* model: transformer-align
* source language(s): eus
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-deu/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eus.deu | 36.3 | 0.562 |
### System Info:
- hf_name: eus-deu
- source_languages: eus
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eus-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eu', 'de']
- src_constituents: {'eus'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eus-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eus-deu/opus-2020-06-16.test.txt
- src_alpha3: eus
- tgt_alpha3: deu
- short_pair: eu-de
- chrF2_score: 0.562
- bleu: 36.3
- brevity_penalty: 0.953
- ref_len: 3315.0
- src_name: Basque
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: eu
- tgt_alpha2: de
- prefer_old: False
- long_pair: eus-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-guw-fr | a3f8b98f0b8e869ed5101d2e24718fc28e9681ba | 2021-09-09T21:59:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"guw",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-guw-fr | 1 | null | transformers | 28,035 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-guw-fr
* source languages: guw
* target languages: fr
* OPUS readme: [guw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.fr | 29.7 | 0.479 |
|
Helsinki-NLP/opus-mt-ru-eu | b4556820ca38d66b4ab8fe7535b3e156e42b78f1 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"eu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-eu | 1 | null | transformers | 28,036 | ---
language:
- ru
- eu
tags:
- translation
license: apache-2.0
---
### rus-eus
* source group: Russian
* target group: Basque
* OPUS readme: [rus-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-eus/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): eus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.eus | 29.7 | 0.539 |
### System Info:
- hf_name: rus-eus
- source_languages: rus
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'eu']
- src_constituents: {'rus'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: eus
- short_pair: ru-eu
- chrF2_score: 0.539
- bleu: 29.7
- brevity_penalty: 0.9440000000000001
- ref_len: 2373.0
- src_name: Russian
- tgt_name: Basque
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: eu
- prefer_old: False
- long_pair: rus-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-uk-pt | 3523c0b971808c8b0ccebd7ac6a001b8457c7a49 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"pt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-pt | 1 | null | transformers | 28,037 | ---
language:
- uk
- pt
tags:
- translation
license: apache-2.0
---
### ukr-por
* source group: Ukrainian
* target group: Portuguese
* OPUS readme: [ukr-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-por/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): por
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.por | 38.1 | 0.601 |
### System Info:
- hf_name: ukr-por
- source_languages: ukr
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'pt']
- src_constituents: {'ukr'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: por
- short_pair: uk-pt
- chrF2_score: 0.601
- bleu: 38.1
- brevity_penalty: 0.981
- ref_len: 21315.0
- src_name: Ukrainian
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: pt
- prefer_old: False
- long_pair: ukr-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-zls-zls | d239210ac43b2ee609f3eff49984a9075d96515a | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hr",
"mk",
"bg",
"sl",
"zls",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zls-zls | 1 | null | transformers | 28,038 | ---
language:
- hr
- mk
- bg
- sl
- zls
tags:
- translation
license: apache-2.0
---
### zls-zls
* source group: South Slavic languages
* target group: South Slavic languages
* OPUS readme: [zls-zls](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zls/README.md)
* model: transformer
* source language(s): bul mkd srp_Cyrl
* target language(s): bul mkd srp_Cyrl
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul-hbs.bul.hbs | 19.3 | 0.514 |
| Tatoeba-test.bul-mkd.bul.mkd | 31.9 | 0.669 |
| Tatoeba-test.hbs-bul.hbs.bul | 18.0 | 0.636 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 19.4 | 0.322 |
| Tatoeba-test.mkd-bul.mkd.bul | 44.6 | 0.679 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 5.5 | 0.152 |
| Tatoeba-test.multi.multi | 26.5 | 0.563 |
### System Info:
- hf_name: zls-zls
- source_languages: zls
- target_languages: zls
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zls/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hr', 'mk', 'bg', 'sl', 'zls']
- src_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- tgt_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.test.txt
- src_alpha3: zls
- tgt_alpha3: zls
- short_pair: zls-zls
- chrF2_score: 0.563
- bleu: 26.5
- brevity_penalty: 1.0
- ref_len: 58.0
- src_name: South Slavic languages
- tgt_name: South Slavic languages
- train_date: 2020-07-27
- src_alpha2: zls
- tgt_alpha2: zls
- prefer_old: False
- long_pair: zls-zls
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-tatoeba-he-fr | 273a5e82420efe16ff6da209dd4b8bed69e78ded | 2020-12-11T14:18:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-he-fr | 1 | null | transformers | 28,039 | ---
language:
- he
- fr
tags:
- translation
license: apache-2.0
---
### he-fr
* source group: Hebrew
* target group: French
* OPUS readme: [heb-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md)
* model: transformer
* source language(s): heb
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.fra | 47.3 | 0.644 |
### System Info:
- hf_name: he-fr
- source_languages: heb
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'fr']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('French', {'fra'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-fra
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: fra
- chrF2_score: 0.644
- bleu: 47.3
- brevity_penalty: 0.9740000000000001
- ref_len: 26123.0
- src_name: Hebrew
- tgt_name: French
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: fr
- prefer_old: False
- short_pair: he-fr
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:03 |
Helsinki-NLP/opus-tatoeba-he-it | b91cac2928be4f0572352b7b6b4e9c17efbefbd8 | 2020-12-11T14:20:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-he-it | 1 | null | transformers | 28,040 | ---
language:
- he
- it
tags:
- translation
license: apache-2.0
---
### he-it
* source group: Hebrew
* target group: Italian
* OPUS readme: [heb-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md)
* model: transformer
* source language(s): heb
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ita | 41.1 | 0.643 |
### System Info:
- hf_name: he-it
- source_languages: heb
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'it']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: ita
- chrF2_score: 0.643
- bleu: 41.1
- brevity_penalty: 0.997
- ref_len: 11464.0
- src_name: Hebrew
- tgt_name: Italian
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: it
- prefer_old: False
- short_pair: he-it
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:01 |
Helsinki-NLP/opus-tatoeba-it-he | b80b664f8167225e608603d75400b0e1798fd7f2 | 2020-12-11T14:25:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-it-he | 1 | null | transformers | 28,041 | ---
language:
- it
- he
tags:
- translation
license: apache-2.0
---
### it-he
* source group: Italian
* target group: Hebrew
* OPUS readme: [ita-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md)
* model: transformer
* source language(s): ita
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.heb | 38.5 | 0.593 |
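The reverse direction can also be driven through the high-level `pipeline` API; a short sketch assuming the checkpoint id shown in this entry (the Italian sentence is just an example):

```python
from transformers import pipeline

# Load the Italian-to-Hebrew model as a translation pipeline
translator = pipeline("translation", model="Helsinki-NLP/opus-tatoeba-it-he")
print(translator("Buongiorno, come stai?"))
```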
### System Info:
- hf_name: it-he
- source_languages: ita
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'he']
- src_constituents: ('Italian', {'ita'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: ita-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt
- src_alpha3: ita
- tgt_alpha3: heb
- chrF2_score: 0.593
- bleu: 38.5
- brevity_penalty: 0.985
- ref_len: 9796.0
- src_name: Italian
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: it
- tgt_alpha2: he
- prefer_old: False
- short_pair: it-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02 |
HenryAI/KerasBERTv1 | 926606ccf88f02751d9784533e78a144c97c83f3 | 2021-12-17T03:20:18.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | HenryAI | null | HenryAI/KerasBERTv1 | 1 | 7 | transformers | 28,042 | Thanks for checking this out! <br />
This video explains the ideas behind KerasBERT (still very much a work in progress)
https://www.youtube.com/watch?v=J3P8WLAELqk |
HieuLV3/QA_UIT_xlm_roberta_large | 86dbe60e37c40c971f21cee3bce1b7ad93f803d1 | 2021-10-18T07:48:26.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | HieuLV3 | null | HieuLV3/QA_UIT_xlm_roberta_large | 1 | null | transformers | 28,043 | Entry not found |
Htenn/DialoGPT-small-spongebob | f94e48816e5eb54bdae169793820ad8675b403b1 | 2022-02-18T09:13:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Htenn | null | Htenn/DialoGPT-small-spongebob | 1 | null | transformers | 28,044 | ---
tags:
- conversational
---
# SpongeBob DialoGPT Model |
HueJanus/DialoGPT-small-ricksanchez | cc030aec9364bb415b0cead09f6880eff7c1885c | 2021-09-25T18:46:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | HueJanus | null | HueJanus/DialoGPT-small-ricksanchez | 1 | null | transformers | 28,045 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.0 | 4698f4668410638a47a35e62b16eefbc7171f305 | 2021-11-12T20:54:33.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.0 | 1 | null | transformers | 28,046 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.3 | 00093eb3d273bb21b9cb029f18e9d1ac265bd4fa | 2021-11-18T03:51:56.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.3 | 1 | null | transformers | 28,047 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.0 | 4f0052833bb3c3aaa562fb735544e71984d08a31 | 2021-11-12T19:00:24.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.0 | 1 | null | transformers | 28,048 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0-concept-extraction-truncated-3edbbc | 98c28b5d42ab07c9322033daa1471e7796d59c87 | 2021-11-02T18:53:48.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0-concept-extraction-truncated-3edbbc | 1 | null | transformers | 28,049 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extracti-truncated-435523 | f876d9494a9d985eff4b4d8864234ca28f63b8ed | 2021-11-02T10:28:50.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extracti-truncated-435523 | 1 | null | transformers | 28,050 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extracti-truncated-7d1e33 | a4486ec94147b607d5fb8d8b5925805226566d9c | 2021-11-02T23:57:47.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extracti-truncated-7d1e33 | 1 | null | transformers | 28,051 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.2 | 0f1d380f406f7374c55811ea3070f601d7b40ac7 | 2021-11-16T14:00:16.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.2 | 1 | null | transformers | 28,052 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.1-concept-extraction-iir-v1.0 | bd42346cd230e8192480a22a8b1d38b95e8e1ecb | 2021-11-12T15:55:53.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.1-concept-extraction-iir-v1.0 | 1 | null | transformers | 28,053 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.2-concept-extraction-iir-v1.2 | 737f93875577e2ebbbf7fcfdb6f47299aaabe8c3 | 2021-11-18T02:44:09.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.2-concept-extraction-iir-v1.2 | 1 | null | transformers | 28,054 | Entry not found |
ILoveThatLady/DialoGPT-small-rickandmorty | 387212ed49fd19750cfa81044cb582c8d627f3a4 | 2021-10-01T21:53:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ILoveThatLady | null | ILoveThatLady/DialoGPT-small-rickandmorty | 1 | null | transformers | 28,055 | ---
tags:
- conversational
---
# Rick And Morty DialoGPT Model |
Icemiser/chat-test | eed77232040dc83568bc7a942d26234a2942b49c | 2021-09-22T02:59:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Icemiser | null | Icemiser/chat-test | 1 | null | transformers | 28,056 | ---
tags:
- conversational
---
# Hank Hill DialoGPT Model |
Ife/BM-FR | 7379d87dc7f22ad44b882d0f435c1decbc2a3283 | 2021-09-16T04:54:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ife | null | Ife/BM-FR | 1 | null | transformers | 28,057 | Entry not found |
Ife/CA-ES | 840a97fa77ce6e80129daaee9ac257044c6dc2f1 | 2021-09-16T02:24:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ife | null | Ife/CA-ES | 1 | null | transformers | 28,058 | # Similar-Languages-MT |
Ife/FR-BM | 0c8fe9f5fafafc935a6adfd166f54e7942d952bd | 2021-09-16T04:48:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ife | null | Ife/FR-BM | 1 | null | transformers | 28,059 | Entry not found |
Ife/PT-ES | d6a349ebd5c92e6f7163f1104c0b9837065e4bb1 | 2021-09-16T04:32:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ife | null | Ife/PT-ES | 1 | null | transformers | 28,060 | Entry not found |
Insun/wav2vec2_large_xlsr_53_VTCK_16K | c796d1e37bdc3a6c87a542435e1309b4c40e906c | 2021-12-04T05:12:48.000Z | [
"pytorch"
] | null | false | Insun | null | Insun/wav2vec2_large_xlsr_53_VTCK_16K | 1 | null | null | 28,061 | Entry not found |
Invincible/Chat_bot-Harrypotter-small | bd316cec24b723a3d4ded1ff1db0be6cf990deb1 | 2021-09-01T13:36:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Invincible | null | Invincible/Chat_bot-Harrypotter-small | 1 | null | transformers | 28,062 | ---
tags:
- conversational
---
# Harry Potter Model |
Istiaque190515/harry_bot_discord | e8b565bbfddabd98cabe406a16466dbc619d7278 | 2021-09-19T11:47:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Istiaque190515 | null | Istiaque190515/harry_bot_discord | 1 | null | transformers | 28,063 | ---
tags:
- conversational
---
# harry_bot |
ItzJorinoPlays/DialoGPT-small-PickleRick | 095546c43c2423415d2ca1192ed39d620806640a | 2021-08-31T12:18:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ItzJorinoPlays | null | ItzJorinoPlays/DialoGPT-small-PickleRick | 1 | null | transformers | 28,064 | ---
tags:
- conversational
---
# Pickle Rick DialoGPT Model |
Jeevesh8/DA-bert | 3d8f2bedbe688f956f1116daf9c0dfcd495d6dc8 | 2021-11-12T11:43:19.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeevesh8 | null | Jeevesh8/DA-bert | 1 | null | transformers | 28,065 | Entry not found |
Jeevesh8/sMLM-bert | c1d5c31b2674612eea5549ec05c5cbc0a44781d9 | 2021-11-12T10:28:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeevesh8 | null | Jeevesh8/sMLM-bert | 1 | null | transformers | 28,066 | Entry not found |
Jeska/BertjeWDialData | c957e9c2411fdd85ee75b7b54a1edfade4d6ad63 | 2021-11-16T18:04:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialData | 1 | null | transformers | 28,067 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialData
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialData
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 297 | 2.2419 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
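For reference, the hyperparameters listed above map onto `TrainingArguments` roughly as follows; this is a sketch rather than the original training script, and the output directory name is only a placeholder:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; effective train batch size is 16 * 4 = 64
training_args = TrainingArguments(
    output_dir="BertjeWDialData",        # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
)
```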
|
Jeska/BertjeWDialDataALL02 | 86d7e3bb3c0f04d910fd55b8ad8c85acb75f2e14 | 2021-12-15T01:40:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALL02 | 1 | null | transformers | 28,068 | Entry not found |
Jeska/BertjeWDialDataALL03 | dab531e6136470b57757c697b267fb7f9220fc9c | 2021-12-16T19:19:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALL03 | 1 | null | transformers | 28,069 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALL03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALL03
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1951 | 1.0 | 1542 | 2.0285 |
| 2.0918 | 2.0 | 3084 | 1.9989 |
| 2.0562 | 3.0 | 4626 | 2.0162 |
| 2.0012 | 4.0 | 6168 | 1.9330 |
| 1.9705 | 5.0 | 7710 | 1.9151 |
| 1.9571 | 6.0 | 9252 | 1.9419 |
| 1.9113 | 7.0 | 10794 | 1.9175 |
| 1.8988 | 8.0 | 12336 | 1.9143 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
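Since this is a masked-language model fine-tuned from Bertje, it can be queried with the `fill-mask` pipeline; a minimal sketch (the Dutch prompt is just an example, not from the training data):

```python
from transformers import pipeline

# Load the fine-tuned Dutch masked-language model
unmasker = pipeline("fill-mask", model="Jeska/BertjeWDialDataALL03")
# Predict the masked token in a Dutch dialogue-style sentence
print(unmasker("Ik heb een vraag over mijn [MASK]."))
```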
|
Jeska/BertjeWDialDataALL04 | 84a5bf2afb717c88c4ed493715ff61a8da69255d | 2021-12-22T02:47:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALL04 | 1 | null | transformers | 28,070 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALL04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALL04
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2954 | 1.0 | 1542 | 2.0372 |
| 2.2015 | 2.0 | 3084 | 2.0104 |
| 2.1661 | 3.0 | 4626 | 2.0372 |
| 2.1186 | 4.0 | 6168 | 1.9549 |
| 2.0939 | 5.0 | 7710 | 1.9438 |
| 2.0867 | 6.0 | 9252 | 1.9648 |
| 2.0462 | 7.0 | 10794 | 1.9465 |
| 2.0315 | 8.0 | 12336 | 1.9412 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/BertjeWDialDataALLQonly | 4286279b69967d287ef7192511fbd30e547a8b3c | 2021-12-04T21:58:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly | 1 | null | transformers | 28,071 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALLQonly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2122 | 1.0 | 871 | 2.0469 |
| 2.0961 | 2.0 | 1742 | 2.0117 |
| 2.0628 | 3.0 | 2613 | 2.0040 |
| 2.0173 | 4.0 | 3484 | 1.9901 |
| 1.9772 | 5.0 | 4355 | 1.9711 |
| 1.9455 | 6.0 | 5226 | 1.9785 |
| 1.917 | 7.0 | 6097 | 1.9380 |
| 1.8933 | 8.0 | 6968 | 1.9651 |
| 1.8708 | 9.0 | 7839 | 1.9915 |
| 1.862 | 10.0 | 8710 | 1.9310 |
| 1.8545 | 11.0 | 9581 | 1.9422 |
| 1.8231 | 12.0 | 10452 | 1.9310 |
| 1.8141 | 13.0 | 11323 | 1.9362 |
| 1.7939 | 14.0 | 12194 | 1.9334 |
| 1.8035 | 15.0 | 13065 | 1.9197 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/BertjeWDialDataALLQonly02 | e4177ab4e305b1131f4e84d797f0f51d695ae6c4 | 2021-12-08T21:40:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly02 | 1 | null | transformers | 28,072 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALLQonly02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly02
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2438 | 1.0 | 871 | 2.1122 |
| 2.1235 | 2.0 | 1742 | 2.0784 |
| 2.0712 | 3.0 | 2613 | 2.0679 |
| 2.0034 | 4.0 | 3484 | 2.0546 |
| 1.9375 | 5.0 | 4355 | 2.0277 |
| 1.8911 | 6.0 | 5226 | 2.0364 |
| 1.8454 | 7.0 | 6097 | 1.9812 |
| 1.808 | 8.0 | 6968 | 2.0175 |
| 1.7716 | 9.0 | 7839 | 2.0286 |
| 1.7519 | 10.0 | 8710 | 1.9653 |
| 1.7358 | 11.0 | 9581 | 1.9817 |
| 1.7084 | 12.0 | 10452 | 1.9633 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/BertjeWDialDataALLQonly05 | ab17d63b6f4ef33f15834c485e4bf9ab9673e7b7 | 2021-12-10T07:54:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly05 | 1 | null | transformers | 28,073 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALLQonly05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly05
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9349 | 1.0 | 871 | 2.9642 |
| 2.9261 | 2.0 | 1742 | 2.9243 |
| 2.8409 | 3.0 | 2613 | 2.8895 |
| 2.7308 | 4.0 | 3484 | 2.8394 |
| 2.6042 | 5.0 | 4355 | 2.7703 |
| 2.4671 | 6.0 | 5226 | 2.7522 |
| 2.3481 | 7.0 | 6097 | 2.6339 |
| 2.2493 | 8.0 | 6968 | 2.6224 |
| 2.1233 | 9.0 | 7839 | 2.5637 |
| 2.0194 | 10.0 | 8710 | 2.4896 |
| 1.9178 | 11.0 | 9581 | 2.4689 |
| 1.8588 | 12.0 | 10452 | 2.4663 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/BertjeWDialDataALLQonly06 | 9310ce567f268d840a800c4d7d84d59ba3febd8e | 2021-12-10T13:02:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly06 | 1 | null | transformers | 28,074 | Entry not found |
Jeska/BertjeWDialDataALLQonly09 | 560b9ab2bed4a4eef6b6ebe38d869142753c1c2a | 2021-12-13T22:05:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly09 | 1 | null | transformers | 28,075 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALLQonly09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly09
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2439 | 1.0 | 871 | 2.1102 |
| 2.1235 | 2.0 | 1742 | 2.0785 |
| 2.0709 | 3.0 | 2613 | 2.0689 |
| 2.0033 | 4.0 | 3484 | 2.0565 |
| 1.9386 | 5.0 | 4355 | 2.0290 |
| 1.8909 | 6.0 | 5226 | 2.0366 |
| 1.8449 | 7.0 | 6097 | 1.9809 |
| 1.8078 | 8.0 | 6968 | 2.0177 |
| 1.7709 | 9.0 | 7839 | 2.0289 |
| 1.7516 | 10.0 | 8710 | 1.9645 |
| 1.7354 | 11.0 | 9581 | 1.9810 |
| 1.7073 | 12.0 | 10452 | 1.9631 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jisu/HanBART_base | a88b4c8aba41aec3d7b6b31e11c5c7c927cf3f9e | 2021-11-22T08:38:11.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | Jisu | null | Jisu/HanBART_base | 1 | null | transformers | 28,076 | Entry not found |
Jodsa/camembert_mlm | 4893c930b4febfbbecca335edd954705f1f15731 | 2021-05-17T13:06:25.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jodsa | null | Jodsa/camembert_mlm | 1 | null | transformers | 28,077 | Entry not found |
JorgeSarry/est5base | c6cb2ca438340e3916c978751e252118e31dd9a3 | 2021-09-13T12:14:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"es",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | JorgeSarry | null | JorgeSarry/est5base | 1 | null | transformers | 28,078 | ---
language: es
---
This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings left following the procedure outlined here https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90
The original model has 582M parameters, with 384M of them being input and output embeddings.
After shrinking the sentencepiece vocabulary from 250K to 30K (the top 10K English and top 20K Spanish tokens), the number of model parameters dropped to 244M, reducing the model size from 2.2GB to 0.9GB - 42% of the original.
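A minimal sketch for loading the shrunken model and checking the reduced vocabulary and parameter count (the printed numbers are approximate):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("JorgeSarry/est5base")
model = AutoModelForSeq2SeqLM.from_pretrained("JorgeSarry/est5base")

# Inspect the shrunken vocabulary and the resulting parameter count
print(model.config.vocab_size)                                   # ~30K tokens
print(round(sum(p.numel() for p in model.parameters()) / 1e6))   # ~244M parameters
```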
|
Julianqll/DialoGPT-small-finalmorty | 5c0b71f0d9b5e546b8b68c8ac0677353400da9b2 | 2021-08-30T18:17:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Julianqll | null | Julianqll/DialoGPT-small-finalmorty | 1 | null | transformers | 28,079 | ---
tags:
- conversational
---
# Morty DialoGPT Model |
Junmai/klue-roberta-large-boolq-finetuned-v1 | fb6aca4d015c24502bc537fd1b645009cb945d47 | 2021-12-08T01:56:14.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Junmai | null | Junmai/klue-roberta-large-boolq-finetuned-v1 | 1 | null | transformers | 28,080 | Entry not found |
KBLab/electra-small-swedish-cased-discriminator | fe42404c3238a1fccb8e8d8119f3721eb4ac7792 | 2020-10-21T08:17:53.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | KBLab | null | KBLab/electra-small-swedish-cased-discriminator | 1 | null | transformers | 28,081 | Entry not found |
KBLab/asr-voxrex-bart-base | 220091024fa797591aec330bb739898f5ee45980 | 2022-01-10T13:38:13.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"asr_seq2seq"
] | automatic-speech-recognition | false | KBLab | null | KBLab/asr-voxrex-bart-base | 1 | null | transformers | 28,082 | ---
tags:
- automatic-speech-recognition
- generated_from_trainer
- asr_seq2seq
---
Test |
KK/DialoGPT-small-Rick | 229d44bbffbc4e526d994beb3d55d114ef731e43 | 2021-06-11T03:07:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | KK | null | KK/DialoGPT-small-Rick | 1 | null | transformers | 28,083 | Entry not found |
KP2500/KPBot | 873375b41e20122cdfc71bae33e6364b69a95a74 | 2021-08-27T06:53:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KP2500 | null | KP2500/KPBot | 1 | null | transformers | 28,084 | ---
tags:
- conversational
---
# RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
|
KY/KY_test_model | 220a913bef4639589d645e7e824f88d44049e604 | 2021-06-15T08:08:44.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KY | null | KY/KY_test_model | 1 | null | transformers | 28,085 | Entry not found |
KY/modeling_test_II | dcca4a7868b6da31a19fbe14630cae0d32bb9b5a | 2021-06-17T02:18:08.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KY | null | KY/modeling_test_II | 1 | null | transformers | 28,086 | Entry not found |
Kairu/DialoGPT-small-Rick | 7fef122236c961764e0680de71b1d29afd2d79af | 2021-11-11T04:23:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kairu | null | Kairu/DialoGPT-small-Rick | 1 | null | transformers | 28,087 | ---
tags:
- conversational
---
# Rick DialoGPT model |
Kairu/RICKBOT | 79f0f5cb1478925bd65c072845c614fb561999f4 | 2021-11-12T07:50:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kairu | null | Kairu/RICKBOT | 1 | null | transformers | 28,088 | ---
tags:
- conversational
---
# Rick bot chat |
KaydenSou/Joshua | 38458f6667fd30ab14582dbf6316af416fb70280 | 2021-08-27T04:30:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KaydenSou | null | KaydenSou/Joshua | 1 | null | transformers | 28,089 | ---
tags:
- conversational
---
# Joshua Dialogue Model |
Khanh/bert-base-multilingual-cased-finetuned-viquad | 26b09b57bb59e3518619326d29ce5b1b120820ad | 2022-01-04T19:07:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Khanh | null | Khanh/bert-base-multilingual-cased-finetuned-viquad | 1 | null | transformers | 28,090 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-viquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-viquad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 2.5534 |
| No log | 2.0 | 130 | 2.1165 |
| No log | 3.0 | 195 | 1.9815 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
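The fine-tuned checkpoint can be used through the `question-answering` pipeline; a minimal sketch with a made-up Vietnamese question/context pair (not from the evaluation data):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Khanh/bert-base-multilingual-cased-finetuned-viquad")
result = qa(
    question="Hà Nội là thủ đô của nước nào?",   # "Hanoi is the capital of which country?"
    context="Hà Nội là thủ đô của Việt Nam.",    # "Hanoi is the capital of Vietnam."
)
print(result["answer"], result["score"])
```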
|
Khanh/distilbert-base-multilingual-cased-finetuned-squad | 934394d83468bba7100a4a8247dc2ffa3b5a3696 | 2022-01-04T15:53:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Khanh | null | Khanh/distilbert-base-multilingual-cased-finetuned-squad | 1 | null | transformers | 28,091 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.923 | 1.0 | 579 | 0.8439 |
| 0.8479 | 2.0 | 1158 | 0.6784 |
| 0.6148 | 3.0 | 1737 | 0.6587 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Khanh/distilbert-base-multilingual-cased-finetuned-viquad | 0712fbfbf51e6533a01d524c99a53c29cdc7eb09 | 2022-01-04T19:19:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Khanh | null | Khanh/distilbert-base-multilingual-cased-finetuned-viquad | 1 | null | transformers | 28,092 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-multilingual-cased-finetuned-viquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-viquad
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 4.0975 |
| No log | 2.0 | 130 | 3.9315 |
| No log | 3.0 | 195 | 3.6742 |
| No log | 4.0 | 260 | 3.4878 |
| No log | 5.0 | 325 | 3.4241 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
KheireddineDaouadi/arsent | 2a3262a158cd070cbca6c486c3131486ce83c648 | 2022-02-09T18:32:42.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | KheireddineDaouadi | null | KheireddineDaouadi/arsent | 1 | null | transformers | 28,093 | Entry not found |
KnutZuidema/DialoGPT-small-morty | 6f6bc58a8b78111c955981296a90bdd66d672353 | 2021-08-31T20:38:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KnutZuidema | null | KnutZuidema/DialoGPT-small-morty | 1 | null | transformers | 28,094 | ---
tags:
- conversational
---
# MORTY!!! |
Koriyy/DialoGPT-medium-gf | fbed186a8ed2c378db1df14686f6faad7b4aab02 | 2022-02-11T03:57:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Koriyy | null | Koriyy/DialoGPT-medium-gf | 1 | null | transformers | 28,095 | ---
tags:
- conversational
---
I'm dumb |
KrispyIChris/DialoGPT-small-harrypotter | a27e3bd3b2fb82c94f60c70c8857873fa9cea979 | 2021-09-16T02:36:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KrispyIChris | null | KrispyIChris/DialoGPT-small-harrypotter | 1 | null | transformers | 28,096 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Krystalan/mdialbart_de | 347b2a7a48a03006923e9a952d1ef08b4fc45d58 | 2022-02-24T11:33:13.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2202.05599",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | Krystalan | null | Krystalan/mdialbart_de | 1 | null | transformers | 28,097 | ---
license: cc-by-nc-sa-4.0
---
## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by [*ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*](https://arxiv.org/abs/2202.05599). |
Kyuyoung11/haremotions-v5 | b53ebee2ca1f540e6f51e5131a65d0bfc84b946d | 2021-09-12T04:32:10.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | Kyuyoung11 | null | Kyuyoung11/haremotions-v5 | 1 | null | transformers | 28,098 | Entry not found |
LactoseLegend/DialoGPT-small-Rick | 340aaf4e99714990b2e3e5509f35f6d55213d40d | 2021-08-28T21:14:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | LactoseLegend | null | LactoseLegend/DialoGPT-small-Rick | 1 | null | transformers | 28,099 | ---
tags:
- conversational
---
# Rick DialoGPT Model
|