modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
huggingtweets/nikkihaleyfan93 | huggingtweets | 2021-10-23T22:45:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/nikkihaleyfan93/1635029077906/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1329566476987232256/wpiYdhhz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Richard Smit ๐ฆ
๐ ๐ ๐ฐ ๐ป๐ฆ ๐ณ๐ฑ ๐บ๐ธ ๐ฌ๐ง ๐ฎ๐ฑ</div>
<div style="text-align: center; font-size: 14px;">@nikkihaleyfan93</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the standard huggingtweets pipeline; the pipeline diagram is available in the [project repository](https://github.com/borisdayma/huggingtweets).
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Richard Smit 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱.
| Data | Richard Smit 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 406 |
| Short tweets | 255 |
| Tweets kept | 2587 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20va5xqa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nikkihaleyfan93's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1v26x5ax) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1v26x5ax/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nikkihaleyfan93')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[Follow @borisdayma on Twitter](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[borisdayma/huggingtweets on GitHub](https://github.com/borisdayma/huggingtweets)
|
espnet/kan-bayashi_ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_-truncated-737899 | espnet | 2021-10-23T20:54:27Z | 2 | 1 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ljspeech
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5498896/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
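The official demo is still marked as coming soon. Until it is published, a minimal inference sketch in the style of the ESPnet model zoo usage might look like the following; this assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed and that the Hugging Face model tag below resolves to this checkpoint.
```python
# Minimal sketch (not the official demo): load the pretrained ESPnet2 TTS model
# and synthesize one sentence. Assumes `pip install espnet espnet_model_zoo soundfile`.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_-truncated-737899"
)
out = tts("Hello, this is a test of the fine-tuned FastSpeech2 and HiFi-GAN model.")
sf.write("speech.wav", out["wav"].numpy(), tts.fs)  # tts.fs is the model's sampling rate
```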
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_tsukuyomi_full_band_vits_prosody | espnet | 2021-10-23T20:50:36Z | 2 | 3 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:tsukuyomi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- tsukuyomi
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/tsukuyomi_full_band_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521446/
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
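As above, the official demo is not yet published. A minimal inference sketch for this Japanese full-band VITS model might look like the following; it assumes `espnet`, `espnet_model_zoo`, `pyopenjtalk`, and `soundfile` are installed and that the Hugging Face model tag below resolves to this checkpoint.
```python
# Minimal sketch (not the official demo): synthesize Japanese speech with this model.
# Raw Japanese text is accepted; the pyopenjtalk prosody G2P runs inside the model frontend.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_tsukuyomi_full_band_vits_prosody")
out = tts("こんにちは、これはテスト音声です。")
sf.write("speech.wav", out["wav"].numpy(), tts.fs)
```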
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest | espnet | 2021-10-23T20:50:21Z | 0 | 3 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:tsukuyomi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- tsukuyomi
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest`
♻️ Imported from https://zenodo.org/record/5521446/
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jvs_jvs010_vits_prosody | espnet | 2021-10-23T20:49:20Z | 1 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jvs
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_jvs010_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521494/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jsut_full_band_vits_prosody | espnet | 2021-10-23T20:47:17Z | 11 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jsut
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_full_band_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521340/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_p-truncated-66d5fc | espnet | 2021-10-23T20:45:49Z | 0 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jsut
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521340/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_vctk_full_band_multi_spk_vits | espnet | 2021-10-23T20:44:14Z | 0 | 1 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- vctk
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_full_band_multi_spk_vits`
♻️ Imported from https://zenodo.org/record/5521431/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
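The official demo is not yet published. Because this is a multi-speaker model, inference additionally needs a speaker identity; the sketch below assumes the recipe uses integer speaker IDs (`sids`), which is the usual setup for the VCTK multi-speaker VITS recipes, and that `espnet`, `espnet_model_zoo`, and `soundfile` are installed. Some multi-speaker recipes use x-vectors (`spembs`) instead, so check the training config.
```python
# Minimal sketch (not the official demo): multi-speaker VITS inference with a speaker id.
# The valid sid range depends on how many VCTK speakers were used during training.
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_vctk_full_band_multi_spk_vits")
out = tts("This is a multi-speaker VITS model.", sids=np.array([4]))  # example speaker index
sf.write("speech.wav", out["wav"].numpy(), tts.fs)
```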
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g-truncated-50b003 | espnet | 2021-10-23T20:43:58Z | 2 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- vctk
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521431/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_vctk_multi_spk_vits | espnet | 2021-10-23T20:42:58Z | 2 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- vctk
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_multi_spk_vits`
♻️ Imported from https://zenodo.org/record/5500759/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
dkleczek/papuGaPT2-finetuned-wierszyki | dkleczek | 2021-10-23T20:37:11Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: papuGaPT2-finetuned-wierszyki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# papuGaPT2-finetuned-wierszyki
This model is a fine-tuned version of [flax-community/papuGaPT2](https://huggingface.co/flax-community/papuGaPT2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8122
## Model description
More information needed
## Intended uses & limitations
More information needed
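Although usage is not documented here, this is a GPT-2-style Polish causal language model, so the standard text-generation pipeline should apply. A sketch, assuming a recent `transformers` release and a Polish prompt chosen purely for illustration:
```python
# Hedged sketch: standard text-generation usage for this GPT-2-style Polish checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="dkleczek/papuGaPT2-finetuned-wierszyki")
print(generator("W lesie ciemnym, pośród drzew", max_length=50, num_return_sequences=3))
```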
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the hedged `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
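For readers who want to reproduce a comparable setup, the values above map onto `TrainingArguments` roughly as in the sketch below; the actual training script is not part of this card, and the argument names follow the Transformers 4.11 API.
```python
# Hedged sketch: the hyperparameters listed above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="papuGaPT2-finetuned-wierszyki",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # The Adam betas/epsilon listed above are the library defaults, so no extra flags are needed.
)
```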
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 202 | 2.8122 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
espnet/kan-bayashi_vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave | espnet | 2021-10-23T20:32:45Z | 1 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- vctk
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5500759/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave | espnet | 2021-10-23T20:30:29Z | 1 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jsut
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`
♻️ Imported from https://zenodo.org/record/5499040/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jsut_tacotron2_prosody | espnet | 2021-10-23T20:30:13Z | 1 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jsut
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tacotron2_prosody`
♻️ Imported from https://zenodo.org/record/5499026/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_csmsc_vits | espnet | 2021-10-23T20:29:44Z | 25 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: zh
datasets:
- csmsc
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_vits`
♻️ Imported from https://zenodo.org/record/5499120/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave | espnet | 2021-10-23T20:29:19Z | 2 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: zh
datasets:
- csmsc
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5499120/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jvs_jvs001_vits_accent_with_pause | espnet | 2021-10-23T20:25:55Z | 0 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jvs
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_jvs001_vits_accent_with_pause`
♻️ Imported from https://zenodo.org/record/5432540/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-d57a28 | espnet | 2021-10-23T20:25:39Z | 1 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jvs
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest`
♻️ Imported from https://zenodo.org/record/5432566/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jsut_vits_accent_with_pause | espnet | 2021-10-23T20:23:56Z | 0 | 3 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jsut
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_vits_accent_with_pause`
♻️ Imported from https://zenodo.org/record/5414980/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_a-truncated-d7d5d0 | espnet | 2021-10-23T20:23:41Z | 3 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- jsut
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5431984/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
huggingtweets/dril-praisegodbarbon | huggingtweets | 2021-10-23T18:50:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/dril-praisegodbarbon/1635015027636/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381764452098437120/74IgKP07_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Boston Psychology PhD</div>
<div style="text-align: center; font-size: 14px;">@dril-praisegodbarbon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the standard huggingtweets pipeline; the pipeline diagram is available in the [project repository](https://github.com/borisdayma/huggingtweets).
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Boston Psychology PhD.
| Data | wint | Boston Psychology PhD |
| --- | --- | --- |
| Tweets downloaded | 3226 | 3207 |
| Retweets | 465 | 802 |
| Short tweets | 319 | 266 |
| Tweets kept | 2442 | 2139 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3knldxg0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-praisegodbarbon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gs5uhsw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gs5uhsw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-praisegodbarbon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[Follow @borisdayma on Twitter](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[borisdayma/huggingtweets on GitHub](https://github.com/borisdayma/huggingtweets)
|
huggingartists/enya | huggingartists | 2021-10-23T12:54:20Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/enya",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/enya
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f43534295450e1b0a276620dffdc3740.379x379x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค HuggingArtists Model ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Enya</div>
<a href="https://genius.com/artists/enya">
<div style="text-align: center; font-size: 14px;">@enya</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Enya.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/enya).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/enya")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/16cuy8yb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Enya's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/il8ldqo8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/il8ldqo8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/enya')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/enya")
model = AutoModelWithLMHead.from_pretrained("huggingartists/enya")
```
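`AutoModelWithLMHead` is deprecated in newer Transformers releases; an equivalent sketch with the current causal-LM auto class:
```python
# Hedged sketch: the same loading with the non-deprecated auto class.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingartists/enya")
model = AutoModelForCausalLM.from_pretrained("huggingartists/enya")
```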
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[AlekseyKorshuk on GitHub](https://github.com/AlekseyKorshuk)
[Follow @alekseykorshuk on Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram community](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[AlekseyKorshuk/huggingartists on GitHub](https://github.com/AlekseyKorshuk/huggingartists)
|
2umm3r/distilbert-base-uncased-finetuned-cola | 2umm3r | 2021-10-23T11:46:51Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5155709926752544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7816
- Matthews Correlation: 0.5156
## Model description
More information needed
## Intended uses & limitations
More information needed
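Although usage is not documented here, the checkpoint is a standard text-classification model fine-tuned on CoLA (linguistic acceptability), so the pipeline API should work as in the sketch below; note that the labels are the generic `LABEL_0`/`LABEL_1` unless the config maps them to names.
```python
# Hedged sketch: query the CoLA acceptability classifier through the pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="2umm3r/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by the girl."))  # expected: acceptable
print(classifier("Book the girl read was by."))      # expected: unacceptable
```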
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5291 | 1.0 | 535 | 0.5027 | 0.4092 |
| 0.3492 | 2.0 | 1070 | 0.5136 | 0.4939 |
| 0.2416 | 3.0 | 1605 | 0.6390 | 0.5056 |
| 0.1794 | 4.0 | 2140 | 0.7816 | 0.5156 |
| 0.1302 | 5.0 | 2675 | 0.8836 | 0.5156 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tiennvcs/bert-large-uncased-finetuned-infovqa | tiennvcs | 2021-10-23T06:01:27Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-infovqa
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-infovqa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3170
## Model description
More information needed
## Intended uses & limitations
More information needed
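Although usage is not documented here, the head is extractive question answering, so the standard QA pipeline should apply, with text extracted from the infographic or document (e.g. OCR output) passed as `context`. A sketch with made-up example text, assuming a recent `transformers` release:
```python
# Hedged sketch: extractive QA over text pulled from a document or infographic.
from transformers import pipeline

qa = pipeline("question-answering", model="tiennvcs/bert-large-uncased-finetuned-infovqa")
result = qa(
    question="What is the total revenue?",
    context="Annual report 2020. Total revenue: 4.2 million USD. Number of employees: 35.",
)
print(result["answer"], result["score"])
```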
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.7861 | 0.12 | 1000 | 3.2778 |
| 3.2186 | 0.23 | 2000 | 3.0658 |
| 2.8504 | 0.35 | 3000 | 3.0456 |
| 2.8621 | 0.46 | 4000 | 2.8758 |
| 2.7851 | 0.58 | 5000 | 2.8680 |
| 2.8016 | 0.69 | 6000 | 2.9244 |
| 2.7592 | 0.81 | 7000 | 2.7735 |
| 2.5737 | 0.93 | 8000 | 2.7640 |
| 2.3493 | 1.04 | 9000 | 2.7257 |
| 2.1041 | 1.16 | 10000 | 2.8442 |
| 2.1713 | 1.27 | 11000 | 2.7723 |
| 2.0594 | 1.39 | 12000 | 2.9982 |
| 2.1825 | 1.5 | 13000 | 2.8272 |
| 2.2486 | 1.62 | 14000 | 2.8897 |
| 2.097 | 1.74 | 15000 | 2.8557 |
| 2.1645 | 1.85 | 16000 | 2.6342 |
| 2.15 | 1.97 | 17000 | 2.8680 |
| 1.5662 | 2.08 | 18000 | 3.2126 |
| 1.6168 | 2.2 | 19000 | 3.1646 |
| 1.5886 | 2.32 | 20000 | 3.3139 |
| 1.6539 | 2.43 | 21000 | 3.2610 |
| 1.6486 | 2.55 | 22000 | 3.3144 |
| 1.637 | 2.66 | 23000 | 3.0437 |
| 1.7186 | 2.78 | 24000 | 2.9936 |
| 1.7543 | 2.89 | 25000 | 3.1641 |
| 1.5301 | 3.01 | 26000 | 4.0560 |
| 1.1436 | 3.13 | 27000 | 4.0116 |
| 1.1902 | 3.24 | 28000 | 4.0240 |
| 1.2728 | 3.36 | 29000 | 4.3068 |
| 1.2586 | 3.47 | 30000 | 3.7894 |
| 1.3164 | 3.59 | 31000 | 3.9242 |
| 1.3093 | 3.7 | 32000 | 4.0444 |
| 1.2812 | 3.82 | 33000 | 4.1779 |
| 1.3165 | 3.94 | 34000 | 3.6633 |
| 0.8357 | 4.05 | 35000 | 5.8137 |
| 0.9583 | 4.17 | 36000 | 5.3305 |
| 0.9135 | 4.28 | 37000 | 5.4973 |
| 1.0011 | 4.4 | 38000 | 5.0349 |
| 0.9553 | 4.51 | 39000 | 5.2086 |
| 1.0182 | 4.63 | 40000 | 5.1197 |
| 0.9569 | 4.75 | 41000 | 5.4579 |
| 0.9437 | 4.86 | 42000 | 5.4467 |
| 0.9791 | 4.98 | 43000 | 4.7657 |
| 0.648 | 5.09 | 44000 | 6.5780 |
| 0.7528 | 5.21 | 45000 | 6.2827 |
| 0.7247 | 5.33 | 46000 | 6.8500 |
| 0.702 | 5.44 | 47000 | 6.4572 |
| 0.6786 | 5.56 | 48000 | 6.5462 |
| 0.7272 | 5.67 | 49000 | 6.2406 |
| 0.6778 | 5.79 | 50000 | 6.4727 |
| 0.6446 | 5.9 | 51000 | 6.3170 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.8.0+cu101
- Datasets 1.11.0
- Tokenizers 0.10.3
|
educhav/Elijah-DialoGPT-small | educhav | 2021-10-23T02:48:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
# Elijah Parker
- Made using DialoGPT (GPT2) algorithm in PyTorch |
espnet/sujay_catslu_map | espnet | 2021-10-22T21:01:58Z | 2 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:catslu",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- catslu
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/sujay_catslu_map`
This model was trained by Sujay S Kumar using catslu recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e31965d55993766461f0964216a0bb9aea3cfb7a
pip install -e .
cd egs2/catslu/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/sujay_catslu_map
```
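Besides the recipe-level run above, inference can also be done directly from Python. The following is a sketch assuming a recent ESPnet2 installation with `espnet_model_zoo` available; the WAV path is a placeholder:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# downloads and builds the model from the Hugging Face Hub tag
speech2text = Speech2Text.from_pretrained("espnet/sujay_catslu_map")

speech, rate = soundfile.read("example_16k.wav")  # placeholder path, 16 kHz mono audio
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```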
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Oct 3 12:53:16 EDT 2021`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a3`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c`
- Commit date: `Wed Sep 22 10:02:03 2021 -0400`
## asr_train_asr_smaller_aishell_xlsr_raw_zh_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|1577|11441|46.1|30.1|23.7|2.5|56.4|81.3|
|inference_asr_model_valid.acc.ave_5best/valid|921|6438|49.4|29.2|21.4|2.7|53.4|79.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|1577|45924|74.4|13.0|12.5|3.2|28.8|81.3|
|inference_asr_model_valid.acc.ave_5best/valid|921|26110|77.0|11.9|11.1|2.7|25.7|79.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_smaller_aishell_xlsr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp_train_asr_smaller_aishell_xlsr/asr_train_asr_smaller_aishell_xlsr_raw_zh_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models: 5
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/speech_shape
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/text_shape.word
valid_shape_file:
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/speech_shape
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 2500
token_list:
- <blank>
- <unk>
# ... (roughly 1,700 additional tokens omitted here: the full token_list consists of
#      single Chinese characters plus intent/slot labels of the form inform_*_none,
#      request_*_none and deny_*_none, and a handful of ASCII tokens such as
#      unknown, dialect, noise and ktv; the Chinese characters are mojibake-encoded
#      in this dump and cannot be reproduced faithfully)
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_xlsr
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 4
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.3a3
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
NONE
```
</details>
|
patrickvonplaten/sat-base | patrickvonplaten | 2021-10-22T17:51:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sat-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sat-base
This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7014
- Wer: 0.5374
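No usage example is included. Assuming the repository also contains the processor/tokenizer saved by the fine-tuning run, transcription could look roughly like this (the audio path is a placeholder):
```python
from transformers import pipeline

# relies on the processor saved with the fine-tuning run
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/sat-base")

# TIMIT-style 16 kHz English speech; replace with your own file
print(asr("speech_16k.wav")["text"])
```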
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9958 | 0.69 | 100 | 6.7171 | 1.0 |
| 3.0453 | 1.38 | 200 | 3.0374 | 1.0 |
| 2.9989 | 2.07 | 300 | 2.9807 | 1.0 |
| 2.969 | 2.76 | 400 | 2.9579 | 1.0 |
| 2.903 | 3.45 | 500 | 2.9072 | 1.0 |
| 2.8565 | 4.14 | 600 | 2.8804 | 1.0 |
| 2.8195 | 4.83 | 700 | 2.7916 | 1.0 |
| 2.3134 | 5.52 | 800 | 2.1456 | 1.0004 |
| 1.5475 | 6.21 | 900 | 1.4663 | 0.9549 |
| 1.1295 | 6.9 | 1000 | 1.1140 | 0.7227 |
| 1.0181 | 7.59 | 1100 | 0.9258 | 0.6497 |
| 1.0252 | 8.28 | 1200 | 0.8430 | 0.6255 |
| 0.835 | 8.97 | 1300 | 0.8063 | 0.6032 |
| 0.662 | 9.66 | 1400 | 0.7595 | 0.5931 |
| 0.5558 | 10.34 | 1500 | 0.7322 | 0.5819 |
| 0.7596 | 11.03 | 1600 | 0.7120 | 0.5708 |
| 0.6169 | 11.72 | 1700 | 0.7073 | 0.5606 |
| 0.4565 | 12.41 | 1800 | 0.7124 | 0.5586 |
| 0.4554 | 13.1 | 1900 | 0.6880 | 0.5501 |
| 0.6216 | 13.79 | 2000 | 0.6783 | 0.5494 |
| 0.5393 | 14.48 | 2100 | 0.7067 | 0.5499 |
| 0.4095 | 15.17 | 2200 | 0.7014 | 0.5438 |
| 0.3551 | 15.86 | 2300 | 0.7000 | 0.5426 |
| 0.5112 | 16.55 | 2400 | 0.6866 | 0.5426 |
| 0.5139 | 17.24 | 2500 | 0.7134 | 0.5446 |
| 0.3638 | 17.93 | 2600 | 0.7130 | 0.5434 |
| 0.3327 | 18.62 | 2700 | 0.6980 | 0.5377 |
| 0.4385 | 19.31 | 2800 | 0.7017 | 0.5390 |
| 0.4986 | 20.0 | 2900 | 0.7014 | 0.5374 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
sienog/autonlp-mt5-xlsum-25085641 | sienog | 2021-10-22T17:20:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autonlp",
"unk",
"dataset:sienog/autonlp-data-mt5-xlsum",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- sienog/autonlp-data-mt5-xlsum
co2_eq_emissions: 11.166602089650883
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 25085641
- CO2 Emissions (in grams): 11.166602089650883
## Validation Metrics
- Loss: 1.173471212387085
- Rouge1: 51.7353
- Rouge2: 36.6771
- RougeL: 45.4129
- RougeLsum: 48.8512
- Gen Len: 82.9375
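In addition to the hosted Inference API call shown in the Usage section below, the checkpoint can presumably be run locally as an mT5 summarizer. A sketch (the input text is a placeholder, and `max_length` is chosen to roughly match the reported generation length):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# add use_auth_token=True if the repository is private, as in other AutoNLP cards
tokenizer = AutoTokenizer.from_pretrained("sienog/autonlp-mt5-xlsum-25085641")
model = AutoModelForSeq2SeqLM.from_pretrained("sienog/autonlp-mt5-xlsum-25085641")

article = "The local council has approved plans to build a new bridge across the river after years of delays."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=84, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```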
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/sienog/autonlp-mt5-xlsum-25085641
``` |
tiennvcs/bert-base-uncased-finetuned-docvqa | tiennvcs | 2021-10-22T15:49:05Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-docvqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-docvqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9146
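As with the other auto-generated cards, no usage snippet is given. A hedged sketch of span extraction with the plain model classes follows; the question and context are invented:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiennvcs/bert-base-uncased-finetuned-docvqa")
model = AutoModelForQuestionAnswering.from_pretrained("tiennvcs/bert-base-uncased-finetuned-docvqa")

question = "What is the invoice date?"
context = "Invoice no. 1234. Date: 12 March 2019. Amount due: 540.00 EUR."  # e.g. OCR text from a document
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# take the most likely start and end token of the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```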
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2151 | 0.1 | 1000 | 2.6299 |
| 1.8885 | 0.21 | 2000 | 2.2217 |
| 1.7353 | 0.31 | 3000 | 2.1675 |
| 1.6188 | 0.41 | 4000 | 2.2436 |
| 1.5802 | 0.52 | 5000 | 2.0539 |
| 1.4875 | 0.62 | 6000 | 2.0551 |
| 1.4675 | 0.73 | 7000 | 1.9368 |
| 1.3485 | 0.83 | 8000 | 1.9456 |
| 1.3273 | 0.93 | 9000 | 1.9281 |
| 1.1048 | 1.04 | 10000 | 1.9333 |
| 0.9529 | 1.14 | 11000 | 2.2019 |
| 0.9418 | 1.24 | 12000 | 2.0381 |
| 0.9209 | 1.35 | 13000 | 1.8753 |
| 0.8788 | 1.45 | 14000 | 1.9964 |
| 0.8729 | 1.56 | 15000 | 1.9690 |
| 0.8671 | 1.66 | 16000 | 1.8513 |
| 0.8379 | 1.76 | 17000 | 1.9627 |
| 0.8722 | 1.87 | 18000 | 1.8988 |
| 0.7842 | 1.97 | 19000 | 1.9146 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingartists/pharaoh | huggingartists | 2021-10-22T15:18:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/pharaoh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/pharaoh
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/3bb9817ec1fbf2b9f944e9da3662bee6.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤗 HuggingArtists Model 🤗</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PHARAOH</div>
<a href="https://genius.com/artists/pharaoh">
<div style="text-align: center; font-size: 14px;">@pharaoh</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from PHARAOH.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/pharaoh).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/pharaoh")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/jefxst5w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on PHARAOH's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1fqlqxjo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1fqlqxjo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/pharaoh')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/pharaoh")
model = AutoModelWithLMHead.from_pretrained("huggingartists/pharaoh")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
muhtasham/autonlp-Doctor_DE-24595546 | muhtasham | 2021-10-22T12:23:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"de",
"dataset:muhtasham/autonlp-data-Doctor_DE",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 210.5957437893554
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595546
- CO2 Emissions (in grams): 210.5957437893554
## Validation Metrics
- Loss: 0.3092539310455322
- MSE: 0.30925390124320984
- MAE: 0.25015318393707275
- R2: 0.841926941198094
- RMSE: 0.5561060309410095
- Explained Variance: 0.8427215218544006
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595546
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
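# this is a single-column regression model (see the MSE/MAE metrics above), so the
# prediction is a single score, e.g.: predicted_score = outputs.logits.squeeze().item()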
``` |
muhtasham/autonlp-Doctor_DE-24595545 | muhtasham | 2021-10-22T11:59:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"de",
"dataset:muhtasham/autonlp-data-Doctor_DE",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 203.30658367993382
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595545
- CO2 Emissions (in grams): 203.30658367993382
## Validation Metrics
- Loss: 0.30214861035346985
- MSE: 0.30214861035346985
- MAE: 0.25911855697631836
- R2: 0.8455587614373526
- RMSE: 0.5496804714202881
- Explained Variance: 0.8476610779762268
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595545
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595545", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595545", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
meghana/hitalm-xlmroberta-finetuned | meghana | 2021-10-22T11:51:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hitalm-xlmroberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hitalm-xlmroberta-finetuned
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7745
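No usage section is provided. Since this is a masked-language-model fine-tune of XLM-RoBERTa, the fill-mask pipeline should apply; a sketch (the example sentence is made up):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="meghana/hitalm-xlmroberta-finetuned")

# XLM-RoBERTa uses <mask> as its mask token
for prediction in fill("Hyderabad is the capital of <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```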
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 48 | 5.4501 |
| No log | 2.0 | 96 | 5.2843 |
| No log | 3.0 | 144 | 4.7745 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
anditya/xlm-roberta-base-finetuned-marc-en | anditya | 2021-10-22T11:18:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
- Mae: 0.4390
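The card omits a usage example. Since the model was fine-tuned on amazon_reviews_multi and evaluated with MAE, it presumably predicts the star rating of a review; a sketch (the returned label names depend on the saved config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="anditya/xlm-roberta-base-finetuned-marc-en")

review = "The case looks nice but broke after two days of use."
print(classifier(review))  # e.g. [{'label': '1 star', 'score': ...}], depending on the label mapping
```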
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1089 | 1.0 | 235 | 0.9027 | 0.4756 |
| 0.9674 | 2.0 | 470 | 0.8885 | 0.4390 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
muhtasham/autonlp-Doctor_DE-24595544 | muhtasham | 2021-10-22T10:51:44Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"de",
"dataset:muhtasham/autonlp-data-Doctor_DE",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 92.87363201770962
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595544
- CO2 Emissions (in grams): 92.87363201770962
## Validation Metrics
- Loss: 0.3001164197921753
- MSE: 0.3001164197921753
- MAE: 0.24272102117538452
- R2: 0.8465975006681247
- RMSE: 0.5478288531303406
- Explained Variance: 0.8468209505081177
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595544
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
model-attribution-challenge/german-gpt2 | model-attribution-challenge | 2021-10-22T08:58:57Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-09T20:17:28Z | ---
language: de
widget:
- text: "Heute ist sehr schรถnes Wetter in"
license: mit
---
# German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on various texts for German.
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model.
**Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it.
More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation.
# Changelog
16.08.2021: Public release of re-trained version of our German GPT-2 model with better results.
15.11.2020: Initial release. Please use the tag `v1.0` for [this older version](https://huggingface.co/dbmdz/german-gpt2/tree/v1.0).
# Training corpora
We use pretty much the same corpora as were used for training the DBMDZ BERT model, which can be found in [this repository](https://github.com/dbmdz/berts).
Thanks to the awesome Hugging Face team, it is possible to create byte-level BPE vocabularies with their [Tokenizers](https://github.com/huggingface/tokenizers) library.
With this Tokenizers library we created a 50K byte-level BPE vocab based on the training corpora.
After creating the vocab, we could train the GPT-2 for German on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters
can be found in the official JAX/FLAX documentation [here](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/README.md)
from Transformers.
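The vocab-building step described above could look roughly like the sketch below; the file paths and the special token are assumptions for illustration, not the exact command that was used:
```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()

# train a 50K byte-level BPE vocab on the (hypothetical) plain-text training corpora
tokenizer.train(
    files=["german_corpus_part1.txt", "german_corpus_part2.txt"],
    vocab_size=50_000,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)
tokenizer.save_model("german-gpt2-tokenizer")
```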
# Using the model
The model itself can be used in this way:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2")
model = AutoModelWithLMHead.from_pretrained("dbmdz/german-gpt2")
```
However, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text:
```python
from transformers import pipeline
pipe = pipeline('text-generation', model="dbmdz/german-gpt2",
tokenizer="dbmdz/german-gpt2")
text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"]
print(text)
```
This could output this beautiful text:
```
Der Sinn des Lebens ist es, im Geist zu verweilen, aber nicht in der Welt zu sein, sondern ganz im Geist zu leben.
Die Menschen beginnen, sich nicht nach der Natur und nach der Welt zu richten, sondern nach der Seele,'
```
# License
All models are licensed under [MIT](LICENSE).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/stefan-it/german-gpt/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
teacookies/autonlp-roberta-base-squad2-24465525 | teacookies | 2021-10-22T08:23:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 63.997230261104875
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465525
- CO2 Emissions (in grams): 63.997230261104875
## Validation Metrics
- Loss: 0.5740988850593567
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465525
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465525", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465525", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465516 | teacookies | 2021-10-22T08:21:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 65.5797497320557
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465516
- CO2 Emissions (in grams): 65.5797497320557
## Validation Metrics
- Loss: 0.6545609831809998
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465516
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465524 | teacookies | 2021-10-22T08:14:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 58.51753681929935
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465524
- CO2 Emissions (in grams): 58.51753681929935
## Validation Metrics
- Loss: 0.5759999752044678
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465524
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465523 | teacookies | 2021-10-22T08:13:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 56.99866929988893
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465523
- CO2 Emissions (in grams): 56.99866929988893
## Validation Metrics
- Loss: 0.5468788146972656
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465523
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465515 | teacookies | 2021-10-22T08:11:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 56.45146749922553
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465515
- CO2 Emissions (in grams): 56.45146749922553
## Validation Metrics
- Loss: 0.5932255387306213
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465515
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465514 | teacookies | 2021-10-22T08:10:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 54.44076291568145
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465514
- CO2 Emissions (in grams): 54.44076291568145
## Validation Metrics
- Loss: 0.5786784887313843
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465514
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465514", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465514", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465518 | teacookies | 2021-10-22T08:04:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 45.268576304018616
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465518
- CO2 Emissions (in grams): 45.268576304018616
## Validation Metrics
- Loss: 0.5742421746253967
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465518
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465518", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465518", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
Gigworks/ASR_id | Gigworks | 2021-10-22T07:28:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | # Wav2Vec2-Large-XLSR-Indonesian
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for Indonesian automatic speech recognition. |
soikit/chinese-bert-wwm-chinese_bert_wwm3 | soikit | 2021-10-22T05:09:25Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: chinese-bert-wwm-chinese_bert_wwm3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bert-wwm-chinese_bert_wwm3
This model is a fine-tuned version of [hfl/chinese-bert-wwm](https://huggingface.co/hfl/chinese-bert-wwm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 72 | 0.4251 |
| No log | 2.0 | 144 | 0.0282 |
| No log | 3.0 | 216 | 0.0048 |
| No log | 4.0 | 288 | 0.0018 |
| No log | 5.0 | 360 | 0.0011 |
| No log | 6.0 | 432 | 0.0006 |
| 0.483 | 7.0 | 504 | 0.0004 |
| 0.483 | 8.0 | 576 | 0.0004 |
| 0.483 | 9.0 | 648 | 0.0002 |
| 0.483 | 10.0 | 720 | 0.0002 |
| 0.483 | 11.0 | 792 | 0.0002 |
| 0.483 | 12.0 | 864 | 0.0001 |
| 0.483 | 13.0 | 936 | 0.0001 |
| 0.0031 | 14.0 | 1008 | 0.0001 |
| 0.0031 | 15.0 | 1080 | 0.0001 |
| 0.0031 | 16.0 | 1152 | 0.0001 |
| 0.0031 | 17.0 | 1224 | 0.0001 |
| 0.0031 | 18.0 | 1296 | 0.0001 |
| 0.0031 | 19.0 | 1368 | 0.0001 |
| 0.0031 | 20.0 | 1440 | 0.0001 |
| 0.0015 | 21.0 | 1512 | 0.0001 |
| 0.0015 | 22.0 | 1584 | 0.0001 |
| 0.0015 | 23.0 | 1656 | 0.0001 |
| 0.0015 | 24.0 | 1728 | 0.0001 |
| 0.0015 | 25.0 | 1800 | 0.0000 |
| 0.0015 | 26.0 | 1872 | 0.0001 |
| 0.0015 | 27.0 | 1944 | 0.0000 |
| 0.001 | 28.0 | 2016 | 0.0000 |
| 0.001 | 29.0 | 2088 | 0.0000 |
| 0.001 | 30.0 | 2160 | 0.0000 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
furyhawk/t5-small-finetuned-xsum | furyhawk | 2021-10-22T05:06:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
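No usage snippet is included; the fine-tuned checkpoint should work with the standard summarization pipeline. A sketch (the input document is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="furyhawk/t5-small-finetuned-xsum")

document = "The local council has approved plans to build a new bridge across the river after years of delays."
print(summarizer(document, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```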
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 128 | 2.9003 | 19.4784 | 2.8529 | 14.7786 | 15.0614 | 18.9825 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
aditeyabaral/sentencetransformer-distilbert-base-cased | aditeyabaral | 2021-10-21T22:30:29Z | 129 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-distilbert-base-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-base-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-base-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pritoms/distilgpt2-finetuned-wikitext2 | pritoms | 2021-10-21T21:16:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 130 | 3.1733 |
| No log | 2.0 | 260 | 3.0756 |
| No log | 3.0 | 390 | 3.0540 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
aditeyabaral/sentencetransformer-roberta-base | aditeyabaral | 2021-10-21T18:03:26Z | 5 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
abhishek/autonlp-hindi-question-answering-23865268 | abhishek | 2021-10-21T13:51:44Z | 14 | 5 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"hi",
"dataset:abhishek/autonlp-data-hindi-question-answering",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: hi
widget:
- text: "ยดเคธเคคเฅเคถ เคงเคตเคจ เค
เคเคคเคฐเคฟเคเฅเคท เคเฅเคเคฆเฅเคฐยด เคเคฟเคธ เคฐเคพเคเฅเคฏ เคฎเฅเค เคธเฅเคฅเคฟเคค เคนเฅ?"
context: "เคธเคคเฅเคถ เคงเคตเคจ เค
เคเคคเคฐเคฟเคเฅเคท เคเฅเคเคฆเฅเคฐ, เคญเคพเคฐเคคเฅเคฏ เค
เคเคคเคฐเคฟเคเฅเคท เค
เคจเฅเคธเคเคงเคพเคจ เคธเคเคเค เคจ (เคเคธเคฐเฅ) เคเคพ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคเฅเคเคฆเฅเคฐ เคนเฅเฅค เคฏเคน เคเคเคงเฅเคฐ เคชเฅเคฐเคฆเฅเคถ เคเฅ เคถเฅเคฐเฅเคนเคฐเฅเคเฅเคเคพ เคฎเฅเค เคธเฅเคฅเคฟเคค เคนเฅ, เคเคธเฅ 'เคถเฅเคฐเฅเคนเคฐเฅเคเฅเคเคพ เคฐเฅเคเค' เคฏเคพ 'เคถเฅเคฐเฅเคนเคฐเฅเคเฅเคเคพ เคฒเคพเคเคเคฟเคเค เคฐเฅเคเค' เคเฅ เคจเคพเคฎ เคธเฅ เคญเฅ เคเคพเคจเคพ เคเคพเคคเคพ เคนเฅเฅค 2002 เคฎเฅเค เคเคธเคฐเฅ เคเฅ เคชเฅเคฐเฅเคต เคชเฅเคฐเคฌเคเคงเค เคเคฐ เคตเฅเคเฅเคเคพเคจเคฟเค เคธเคคเฅเคถ เคงเคตเคจ เคเฅ เคฎเคฐเคฃเฅเคชเคฐเคพเคเคค เคเคจเคเฅ เคธเคฎเฅเคฎเคพเคจ เคฎเฅเค เคเคธเคเคพ เคจเคพเคฎ เคฌเคฆเคฒเคพ เคเคฏเคพเฅค เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเฅ เค
เคธเฅเคฎเฅ\u200dเคฌเคฒเฅ เคเฅ เคฒเคฟเค เคฆเฅเคธเคฐเคพ เคญเคตเคจ เคเฅเคจเฅ\u200dเคฆเฅเคฐเฅเคฏ เคฎเคเคคเฅเคฐเคฟเคฎเคเคกเคฒ เคจเฅ 12 เคธเคฟเคคเคฎเฅ\u200dเคฌเคฐ, 2013 เคเฅ เคธเคคเฅเคถ เคงเคตเคจ เค
เคเคคเคฐเคฟเคเฅเคท เคเฅเคจเฅ\u200dเคฆเฅเคฐ, เคถเฅเคฐเฅเคนเคฐเคฟเคเฅเคเคพ เคฎเฅเค เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเฅ เค
เคธเฅเคฎเฅ\u200dเคฌเคฒเฅ เคเฅ เคฒเคฟเค เคฆเฅเคธเคฐเฅ เคญเคตเคจ เคเฅ เคจเคฟเคฐเฅเคฎเคพเคฃ เคเฅ เคฎเคเคเฅเคฐเฅ เคฆเฅเฅค เคเคธ เคชเคฐ 363.95 เคเคฐเฅเคกเคผ เคฐเฅเคชเคฏเฅ เคเฅ เค
เคจเฅเคฎเคพเคจเคฟเคค เคฒเคพเคเคค เคเคเคเฅ, เคเคฟเคธเคฎเฅเค เคธเคพเคค เคเคฐเฅเคกเคผ เคฐเฅเคชเคฏเฅ เคเคพ เคเคฐเฅเค เคตเคฟเคฆเฅเคถเฅ เคฎเฅเคฆเฅเคฐเคพ เคฎเฅเค เคนเฅเคเคพเฅค เคเคธ เคฆเฅเคธเคฐเฅ เคฌเคฟเคฒเฅเคกเคฟเคเค เคเฅ เคเคชเคฒเคฌเฅ\u200dเคง เคนเฅ เคเคพเคจเฅ เคธเฅ เคชเฅเคเคธเคเคฒเคตเฅ เคเคฐ เคเฅเคเคธเคเคฒเคตเฅ เคเฅ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคซเฅเคฐเฅเคเฅเคตเฅเคเคธเฅ เคฌเคขเคผเฅเคเฅเฅค เคฏเคน เคเฅเคเคธเคเคฒเคตเฅ เคเคฎเคเฅ-III เคเฅ เคเคเฅเคเคฐเคฃ เคเฅ เคฒเคฟเค เคตเคฐเฅเคคเคฎเคพเคจ เคตเฅ\u200dเคนเฅเคเคฒ เค
เคธเฅเคฎเฅ\u200dเคฌเคฒเฅ เคฌเคฟเคฒเฅเคกเคฟเคเค เคเฅ เค
เคคเคฟเคฐเคฟเคเฅ\u200dเคค เคธเฅเคตเคฟเคงเคพ เคฎเฅเคนเฅเคฏเคพ เคเคฐเคพเคฏเฅเคเฅเฅค เคคเฅเคธเคฐเฅ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคชเฅเคก เคคเคฅเคพ เคญเคตเคฟเคทเฅ\u200dเคฏ เคฎเฅเค เคธเคพเคฎเคพเคจเฅ\u200dเคฏ เคฏเคพเคจ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคเฅ เคฒเคฟเค เคญเฅ เคเคธเคธเฅ เคเคพเคซเฅ เคธเฅเคตเคฟเคงเคพ เคฎเคฟเคฒเฅเคเฅเฅค[1]\nเคฒเคพเคเค เคชเฅเคก\nเคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคฒเฅเคจเฅเค เคชเฅเคก\nเคเคธ เคฒเคพเคเค เคชเฅเคก เคธเฅ เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเคฐ เคธเคเคตเคฐเฅเคงเคฟเคค เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเฅ เคฒเคพเคเค เคเคฟเคฏเคพ เคเคฏเคพ เคฅเคพเฅค เคฏเคน เคตเคฐเฅเคคเคฎเคพเคจ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคธเฅเคฅเคฒ เคเฅ เคฆเคเฅเคทเคฟเคฃเฅ เคธเคฟเคฐเฅ เคชเคฐ เคธเฅเคฅเคฟเคค เคนเฅเฅค เคเคธเฅ เคธเฅเคตเคพเคฎเฅเคเฅเคค เคเคฐ เคฆเคฟเคฏเคพ เคเคฏเคพ เคนเฅเฅค เคถเฅเคฐเฅ เคฎเฅเค เคเคธเฅ เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคฒเคพเคเค เคเคฐเคจเฅ เคเฅ เคฒเคฟเค เคฌเคจเคพเคฏเคพ เคเคฏเคพ เคฅเคพเฅค เคฒเฅเคเคฟเคจ เคฌเคพเคฆ เคฎเฅเค เคเคธเฅ เคธเคเคตเคฐเฅเคงเคฟเคค เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคชเคฐเคฟเคธเคฐ เคเฅ เคฐเฅเคช เคฎเฅเค เคเคธเฅเคคเฅเคฎเคพเคฒ เคเคฟเคฏเคพ เคเคฏเคพ เคฅเคพเฅค\nเคชเฅเคฐเคฅเคฎ เคฒเคพเคเค เคชเฅเคก\nเคฆเฅเคตเคฟเคคเฅเคฏ เคฒเฅเคจเฅเค เคชเฅเคก\nเคคเฅเคคเฅเคฏ เคฒเคพเคเค เคชเฅเคก\nเคธเคจเฅเคฆเคฐเฅเคญ เคถเฅเคฐเฅเคฃเฅ:เคญเคพเคฐเคคเฅเคฏ เค
เคเคคเคฐเคฟเคเฅเคท เค
เคจเฅเคธเคเคงเคพเคจ เคธเคเคเค เคจ\nเคถเฅเคฐเฅเคฃเฅ:เคญเคพเคฐเคค เคเฅ เคฐเฅเคเฅเค เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคธเฅเคฅเคฒ"
datasets:
- abhishek/autonlp-data-hindi-question-answering
co2_eq_emissions: 39.76330395590446
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- CO2 Emissions (in grams): 39.76330395590446
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-hindi-question-answering-23865268
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
# Optional labels: passing start/end positions makes the model also return a training loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
tiennvcs/distilbert-base-uncased-finetuned-infovqa | tiennvcs | 2021-10-21T11:37:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-infovqa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8872
## Model description
More information needed
## Intended uses & limitations
More information needed
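The card does not yet document usage. As a minimal, hedged sketch (assuming the checkpoint behaves like a standard extractive question-answering model; the question and context below are made-up placeholders):
```python
from transformers import pipeline
# Sketch only: standard extractive QA inference; inputs are illustrative placeholders
qa = pipeline("question-answering", model="tiennvcs/distilbert-base-uncased-finetuned-infovqa")
result = qa(question="What is the total revenue?",
            context="According to the infographic, total revenue in 2020 reached 4.2 million dollars.")
print(result["answer"], result["score"])
```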
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.02 | 100 | 4.7706 |
| No log | 0.05 | 200 | 4.4399 |
| No log | 0.07 | 300 | 3.8175 |
| No log | 0.09 | 400 | 3.8306 |
| 3.3071 | 0.12 | 500 | 3.6480 |
| 3.3071 | 0.14 | 600 | 3.6451 |
| 3.3071 | 0.16 | 700 | 3.4974 |
| 3.3071 | 0.19 | 800 | 3.4686 |
| 3.3071 | 0.21 | 900 | 3.4703 |
| 3.5336 | 0.23 | 1000 | 3.3165 |
| 3.5336 | 0.25 | 1100 | 3.3634 |
| 3.5336 | 0.28 | 1200 | 3.3466 |
| 3.5336 | 0.3 | 1300 | 3.3411 |
| 3.5336 | 0.32 | 1400 | 3.2456 |
| 3.3593 | 0.35 | 1500 | 3.3257 |
| 3.3593 | 0.37 | 1600 | 3.2941 |
| 3.3593 | 0.39 | 1700 | 3.2581 |
| 3.3593 | 0.42 | 1800 | 3.1680 |
| 3.3593 | 0.44 | 1900 | 3.2077 |
| 3.2436 | 0.46 | 2000 | 3.2422 |
| 3.2436 | 0.49 | 2100 | 3.2529 |
| 3.2436 | 0.51 | 2200 | 3.2681 |
| 3.2436 | 0.53 | 2300 | 3.1055 |
| 3.2436 | 0.56 | 2400 | 3.0174 |
| 3.093 | 0.58 | 2500 | 3.0608 |
| 3.093 | 0.6 | 2600 | 3.0200 |
| 3.093 | 0.63 | 2700 | 2.9884 |
| 3.093 | 0.65 | 2800 | 3.0041 |
| 3.093 | 0.67 | 2900 | 2.9700 |
| 3.0087 | 0.69 | 3000 | 3.0993 |
| 3.0087 | 0.72 | 3100 | 3.0499 |
| 3.0087 | 0.74 | 3200 | 2.9317 |
| 3.0087 | 0.76 | 3300 | 3.0817 |
| 3.0087 | 0.79 | 3400 | 3.0035 |
| 2.9694 | 0.81 | 3500 | 3.0850 |
| 2.9694 | 0.83 | 3600 | 2.9948 |
| 2.9694 | 0.86 | 3700 | 2.9874 |
| 2.9694 | 0.88 | 3800 | 2.9202 |
| 2.9694 | 0.9 | 3900 | 2.9322 |
| 2.8277 | 0.93 | 4000 | 2.9195 |
| 2.8277 | 0.95 | 4100 | 2.8638 |
| 2.8277 | 0.97 | 4200 | 2.8809 |
| 2.8277 | 1.0 | 4300 | 2.8872 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
joehdownardkainos/autonlp-intent-modelling-21895237 | joehdownardkainos | 2021-10-21T11:29:28Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autonlp",
"unk",
"dataset:joehdownardkainos/autonlp-data-intent-modelling",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- joehdownardkainos/autonlp-data-intent-modelling
co2_eq_emissions: 1.5688902203257171
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21895237
- CO2 Emissions (in grams): 1.5688902203257171
## Validation Metrics
- Loss: 1.6614878177642822
- Rouge1: 32.4158
- Rouge2: 24.6194
- RougeL: 29.9278
- RougeLsum: 29.4988
- Gen Len: 58.7778
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/joehdownardkainos/autonlp-intent-modelling-21895237
``` |
anton-l/wav2vec2-base-finetuned-ks | anton-l | 2021-10-21T11:04:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0952
- Accuracy: 0.9823
## Model description
More information needed
## Intended uses & limitations
More information needed
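Usage is not documented in the card. A minimal sketch, assuming the standard `audio-classification` pipeline and a local 16 kHz WAV file (the path below is a placeholder):
```python
from transformers import pipeline
# Sketch only: keyword-spotting inference; "speech_command.wav" is a placeholder path
classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-finetuned-ks")
print(classifier("speech_command.wav", top_k=3))
```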
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7908 | 1.0 | 399 | 0.6776 | 0.9009 |
| 0.3202 | 2.0 | 798 | 0.2061 | 0.9763 |
| 0.221 | 3.0 | 1197 | 0.1257 | 0.9785 |
| 0.1773 | 4.0 | 1596 | 0.0990 | 0.9813 |
| 0.1729 | 5.0 | 1995 | 0.0952 | 0.9823 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
BSC-LT/roberta-large-bne | BSC-LT | 2021-10-21T10:32:31Z | 37 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"national library of spain",
"spanish",
"bne",
"es",
"dataset:bne",
"arxiv:1907.11692",
"arxiv:2107.07253",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"
metrics:
- "ppl"
widget:
- text: "Este aรฑo las campanadas de La Sexta las <mask> Pedroche y Chicote."
- text: "El artista Antonio Orozco es un colaborador de La <mask>."
- text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
- text: "Hay base legal dentro del marco <mask> actual."
---
**โ ๏ธNOTICEโ ๏ธ: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne
# RoBERTa large trained with data from National Library of Spain (BNE)
## Model Description
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Training corpora and preprocessing
The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.
To obtain a high-quality training corpus, the data was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of ill-formed sentences and deduplication of repetitive content; document boundaries were kept during the process. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was then applied, yielding the final 570GB of text.
Some of the statistics of the corpus:
| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |
## Tokenization and pre-training
The training corpus has been tokenized using a byte-level version of Byte-Pair Encoding (BPE), as used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The RoBERTa-large-bne pre-training consists of masked language model training that follows the approach employed for RoBERTa large. Training lasted a total of 96 hours on 32 computing nodes, each with 4 NVIDIA V100 GPUs of 16GB VRAM.
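The card itself does not include a usage snippet; a minimal fill-mask sketch, assuming the standard `transformers` pipeline API and the relocated model identifier from the notice above, could look like this:
```python
from transformers import pipeline
# Sketch only: masked-language-modelling inference with one of the widget examples
unmasker = pipeline("fill-mask", model="PlanTL-GOB-ES/roberta-large-bne")
print(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."))
```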
## Evaluation and results
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BSC-LT/roberta-large-bne-capitel-pos | BSC-LT | 2021-10-21T10:31:47Z | 12 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "pos"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
widget:
- text: "Festival de San Sebastiรกn: Johnny Depp recibirรก el premio Donostia en pleno rifirrafe judicial con Amber Heard"
- text: "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
- text: "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."
- text: "El Tribunal Superior de Justicia se pronunciรณ ayer: \"Hay base legal dentro del marco jurรญdico actual\"."
---
**โ ๏ธNOTICEโ ๏ธ: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-pos
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).
## Evaluation and results
F1 Score: 0.9851 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
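The card gives no usage example; as a hedged sketch (assuming the standard token-classification pipeline and the relocated model identifier from the notice above), tagging one of the widget sentences might look like this:
```python
from transformers import pipeline
# Sketch only: POS tagging as token classification; aggregation merges subword pieces into words
pos_tagger = pipeline("token-classification",
                      model="PlanTL-GOB-ES/roberta-large-bne-capitel-pos",
                      aggregation_strategy="simple")
print(pos_tagger("El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."))
```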
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BSC-LT/roberta-base-bne-capitel-pos | BSC-LT | 2021-10-21T10:29:55Z | 27 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "pos"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
widget:
- text: "Festival de San Sebastiรกn: Johnny Depp recibirรก el premio Donostia en pleno rifirrafe judicial con Amber Heard"
- text: "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
- text: "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."
- text: "El Tribunal Superior de Justicia se pronunciรณ ayer: \"Hay base legal dentro del marco jurรญdico actual\"."
---
**โ ๏ธNOTICEโ ๏ธ: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-pos
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).
## Evaluation and results
F1 Score: 0.9846 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BSC-LT/roberta-base-bne-capitel-ner | BSC-LT | 2021-10-21T10:29:35Z | 43 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
---
**โ ๏ธNOTICEโ ๏ธ: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset.
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
## Evaluation and results
F1 Score: 0.8960
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
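The card gives no usage example; a hedged sketch (assuming the standard NER pipeline and the relocated model identifier from the notice above; the input sentence is just an example borrowed from the sibling cards):
```python
from transformers import pipeline
# Sketch only: named entity recognition; aggregation groups subword pieces into entity spans
ner = pipeline("ner",
               model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner",
               aggregation_strategy="simple")
print(ner("Festival de San Sebastián: Johnny Depp recibirá el premio Donostia."))
```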
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BSC-LT/roberta-base-biomedical-clinical-es | BSC-LT | 2021-10-21T10:28:12Z | 12 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"biomedical",
"clinical",
"spanish",
"es",
"arxiv:2109.03570",
"arxiv:2109.07765",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- es
tags:
- biomedical
- clinical
- spanish
license: apache-2.0
metrics:
- ppl
widget:
- text: "El รบnico antecedente personal a reseรฑar era la <mask> arterial."
- text: "Las radiologรญas รณseas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales."
- text: "En el <mask> toraco-abdรณmino-pรฉlvico no se encontraron hallazgos patolรณgicos de interรฉs."
---
**โ ๏ธNOTICEโ ๏ธ: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
# Biomedical-clinical language model for Spanish
Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-SANIDAD/lm-biomedical-clinical-es) and read our [preprint](https://arxiv.org/abs/2109.03570) "_Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario._".
## Tokenization and model pretraining
This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a
**biomedical-clinical** corpus in Spanish collected from several sources (see next section).
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level, following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. Training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences.
## Training corpora and preprocessing
The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are:
- data parsing in different formats
- sentence splitting
- language detection
- filtering of ill-formed sentences
- deduplication of repetitive contents
- keep the original document boundaries
Then, the biomedical corpora are concatenated, and further global deduplication among them has been applied.
Finally, the clinical corpus is concatenated to the cleaned biomedical corpus, resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora:
| Name | No. tokens | Description |
|-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. |
| Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. |
| Clinical notes/documents | 91,250,080 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. |
| [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. |
| [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. |
| Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. |
| Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". |
| [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. |
| [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source. |
| PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. |
## Evaluation and results
The model has been evaluated on the Named Entity Recognition (NER) using the following datasets:
- [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/).
- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ).
- ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.
The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models:
| F1 - Precision - Recall | roberta-base-biomedical-clinical-es | mBERT | BETO |
|---------------------------|----------------------------|-------------------------------|-------------------------|
| PharmaCoNER | **90.04** - **88.92** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 |
| CANTEMIST | **83.34** - **81.48** - **85.30** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 |
| ICTUSnet | **88.08** - **84.92** - **91.50** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 |
## Intended uses & limitations
The model is ready-to-use only for masked language modelling, i.e. the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification.
## Cite
If you use our models, please cite our latest preprint:
```bibtex
@misc{carrino2021biomedical,
title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario},
author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas},
year={2021},
eprint={2109.03570},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
If you use our Medical Crawler corpus, please cite the preprint:
```bibtex
@misc{carrino2021spanish,
title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models},
author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas},
year={2021},
eprint={2109.07765},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es")
model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es")
from transformers import pipeline
unmasker = pipeline('fill-mask', model="BSC-TeMU/roberta-base-biomedical-es")
unmasker("El รบnico antecedente personal a reseรฑar era la <mask> arterial.")
```
```
# Output
[
{
"sequence": " El รบnico antecedente personal a reseรฑar era la hipertensiรณn arterial.",
"score": 0.9855039715766907,
"token": 3529,
"token_str": " hipertensiรณn"
},
{
"sequence": " El รบnico antecedente personal a reseรฑar era la diabetes arterial.",
"score": 0.0039140828885138035,
"token": 1945,
"token_str": " diabetes"
},
{
"sequence": " El รบnico antecedente personal a reseรฑar era la hipotensiรณn arterial.",
"score": 0.002484665485098958,
"token": 11483,
"token_str": " hipotensiรณn"
},
{
"sequence": " El รบnico antecedente personal a reseรฑar era la Hipertensiรณn arterial.",
"score": 0.0023484621196985245,
"token": 12238,
"token_str": " Hipertensiรณn"
},
{
"sequence": " El รบnico antecedente personal a reseรฑar era la presiรณn arterial.",
"score": 0.0008009297889657319,
"token": 2267,
"token_str": " presiรณn"
}
]
``` |
pritoms/distilgpt2-finetuned-mit-lecture | pritoms | 2021-10-21T08:59:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-mit-lecture
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-mit-lecture
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 144 | 3.8737 |
| No log | 2.0 | 288 | 3.8436 |
| No log | 3.0 | 432 | 3.8377 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bochaowei/t5-small-finetuned-xsum-wei2 | bochaowei | 2021-10-21T07:21:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 29.2287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4131
- Rouge1: 29.2287
- Rouge2: 8.4073
- Rougel: 23.0934
- Rougelsum: 23.0954
- Gen Len: 18.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
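Usage is not documented in the card. A minimal sketch, assuming the standard summarization pipeline (the input article below is a placeholder):
```python
from transformers import pipeline
# Sketch only: abstractive summarization with the fine-tuned T5-small checkpoint
summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-xsum-wei2")
article = "The council has approved plans for a new bridge across the river, with construction expected to start next year."
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```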
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.633 | 1.0 | 17004 | 2.4131 | 29.2287 | 8.4073 | 23.0934 | 23.0954 | 18.8236 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
cactode/gpt2_urbandict_textgen | cactode | 2021-10-21T06:43:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # GPT2 Fine Tuned on UrbanDictionary
Honestly a little horrifying, but still funny.
## Usage
Use with GPT2Tokenizer. Pad token should be set to the EOS token.
Inputs should be of the form "define <your word>: ".
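A minimal sketch of the usage described above (assuming the standard GPT-2 classes; the word "yeet" is just an example prompt):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Sketch only: pad token set to EOS and a prompt of the form "define <word>: ", as described above
tokenizer = GPT2Tokenizer.from_pretrained("cactode/gpt2_urbandict_textgen")
model = GPT2LMHeadModel.from_pretrained("cactode/gpt2_urbandict_textgen")
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer("define yeet: ", return_tensors="pt")
outputs = model.generate(**inputs, max_length=60, do_sample=True, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```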
## Training Data
All training data was obtained from [Urban Dictionary Words And Definitions on Kaggle](https://www.kaggle.com/therohk/urban-dictionary-words-dataset). Data was additionally filtered, normalized, and spell-checked.
## Bias
This model was trained on public internet data and will almost certainly produce offensive results. Some efforts were made to reduce this (e.g., definitions containing ethnic or gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions. |
huggingtweets/s66jewelevans | huggingtweets | 2021-10-20T23:06:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/s66jewelevans/1634771194675/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1313199276852342784/fJ8Lb2C__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jewel Evans</div>
<div style="text-align: center; font-size: 14px;">@s66jewelevans</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jewel Evans.
| Data | Jewel Evans |
| --- | --- |
| Tweets downloaded | 1714 |
| Retweets | 2 |
| Short tweets | 20 |
| Tweets kept | 1692 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ec5yuuj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @s66jewelevans's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kxbfdnt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kxbfdnt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/s66jewelevans')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bochaowei/t5-small-finetuned-cnn-wei0 | bochaowei | 2021-10-20T18:58:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-wei0
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.2324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-wei0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7149
- Rouge1: 24.2324
- Rouge2: 11.7178
- Rougel: 20.0508
- Rougelsum: 22.8698
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9068 | 1.0 | 4786 | 1.7149 | 24.2324 | 11.7178 | 20.0508 | 22.8698 | 19.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
monologg/koelectra-base-generator | monologg | 2021-10-20T16:55:00Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"korean",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA (Base Generator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-generator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer.tokenize("[CLS] ํ๊ตญ์ด ELECTRA๋ฅผ ๊ณต์ ํฉ๋๋ค. [SEP]")
['[CLS]', 'ํ๊ตญ์ด', 'E', '##L', '##EC', '##T', '##RA', '##๋ฅผ', '๊ณต์ ', '##ํฉ๋๋ค', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', 'ํ๊ตญ์ด', 'E', '##L', '##EC', '##T', '##RA', '##๋ฅผ', '๊ณต์ ', '##ํฉ๋๋ค', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForMaskedLM
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="monologg/koelectra-base-generator",
tokenizer="monologg/koelectra-base-generator"
)
print(fill_mask("๋๋ {} ๋ฐฅ์ ๋จน์๋ค.".format(fill_mask.tokenizer.mask_token)))
```
|
monologg/koelectra-base-v2-discriminator | monologg | 2021-10-20T16:54:30Z | 48 | 1 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"korean",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA v2 (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-v2-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v2-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
>>> tokenizer.tokenize("[CLS] ํ๊ตญ์ด ELECTRA๋ฅผ ๊ณต์ ํฉ๋๋ค. [SEP]")
['[CLS]', 'ํ๊ตญ์ด', 'EL', '##EC', '##TRA', '##๋ฅผ', '๊ณต์ ', '##ํฉ๋๋ค', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', 'ํ๊ตญ์ด', 'EL', '##EC', '##TRA', '##๋ฅผ', '๊ณต์ ', '##ํฉ๋๋ค', '.', '[SEP]'])
[2, 5084, 16248, 3770, 19059, 29965, 2259, 10431, 5, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v2-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
sentence = "๋๋ ๋ฐฉ๊ธ ๋ฐฅ์ ๋จน์๋ค."
fake_sentence = "๋๋ ๋ด์ผ ๋ฐฅ์ ๋จน์๋ค."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions.tolist()[1:-1])))
```
|
monologg/koelectra-base-v3-discriminator | monologg | 2021-10-20T16:53:40Z | 31,234 | 30 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"korean",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA v3 (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-v3-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer.tokenize("[CLS] ํ๊ตญ์ด ELECTRA๋ฅผ ๊ณต์ ํฉ๋๋ค. [SEP]")
['[CLS]', 'ํ๊ตญ์ด', 'EL', '##EC', '##TRA', '##๋ฅผ', '๊ณต์ ', '##ํฉ๋๋ค', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', 'ํ๊ตญ์ด', 'EL', '##EC', '##TRA', '##๋ฅผ', '๊ณต์ ', '##ํฉ๋๋ค', '.', '[SEP]'])
[2, 11229, 29173, 13352, 25541, 4110, 7824, 17788, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v3-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
sentence = "๋๋ ๋ฐฉ๊ธ ๋ฐฅ์ ๋จน์๋ค."
fake_sentence = "๋๋ ๋ด์ผ ๋ฐฅ์ ๋จน์๋ค."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions.tolist()[1:-1])))
```
|
jbarry/irish-gpt2 | jbarry | 2021-10-20T16:40:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | This model was trained on the OSCAR ga dataset for experimental purposes. The files used for training the tokenizer and model are included in this repository. |
bochaowei/t5-small-finetuned-xsum-wei0 | bochaowei | 2021-10-20T15:10:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei0
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 25.7398
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6289
- Rouge1: 25.7398
- Rouge2: 6.1361
- Rougel: 19.8262
- Rougelsum: 19.8284
- Gen Len: 18.7984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.858 | 1.0 | 1701 | 2.6289 | 25.7398 | 6.1361 | 19.8262 | 19.8284 | 18.7984 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
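The card does not include an inference example; a minimal summarization sketch (the article text is a placeholder, and depending on how the training data was preprocessed a `summarize: ` prefix may be needed) could look like:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-xsum-wei0")

# placeholder input; XSum-style models are tuned to produce one-sentence summaries
article = "The full text of a news article goes here."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```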
|
YushiUeda/test | YushiUeda | 2021-10-20T14:48:21Z | 4 | 0 | espnet | [
"espnet",
"audio",
"diarization",
"dataset:mini_librispeech",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- diarization
language:
datasets:
- mini_librispeech
license: cc-by-4.0
---
## ESPnet2 DIAR model
### `YushiUeda/test`
This model was trained by Yushi Ueda using the mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 4dfa2be4331d3d68f124aa5fd81f63217a7278a4
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/test
```
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Wed Aug 25 23:29:07 EDT 2021`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `19bcd34f9395e01e54a97c4db5ecbcedb429dd92`
- Commit date: `Tue Aug 24 19:50:44 2021 -0400`
## `diar_train_diar_raw_max_epoch20`
### DER
`dev_clean_2_ns2_beta2_500`
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med1_collar0.0|32.42|
|result_th0.3_med11_collar0.0|32.03|
|result_th0.4_med1_collar0.0|30.96|
|result_th0.4_med11_collar0.0|30.26|
|result_th0.5_med1_collar0.0|30.35|
|result_th0.5_med11_collar0.0|29.37|
|result_th0.6_med1_collar0.0|30.77|
|result_th0.6_med11_collar0.0|29.52|
|result_th0.7_med1_collar0.0|32.60|
|result_th0.7_med11_collar0.0|31.03|
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw_max_epoch20
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
loss_type: pit
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
required:
- output_dir
version: 0.10.2a1
distributed: false
```
</details>
|
huggingtweets/dril-linaarabii | huggingtweets | 2021-10-20T11:36:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/dril-linaarabii/1634729786636/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1423543147305619456/9RT-Ji0Z_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Lina Arabi</div>
<div style="text-align: center; font-size: 14px;">@dril-linaarabii</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Lina Arabi.
| Data | wint | Lina Arabi |
| --- | --- | --- |
| Tweets downloaded | 3227 | 3130 |
| Retweets | 473 | 896 |
| Short tweets | 317 | 322 |
| Tweets kept | 2437 | 1912 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yq3shwo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-linaarabii's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21rpwe17) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21rpwe17/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-linaarabii')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
facebook/hubert-xlarge-ll60k | facebook | 2021-10-20T10:20:44Z | 794 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"hubert",
"feature-extraction",
"speech",
"en",
"dataset:libri-light",
"arxiv:2106.07447",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- libri-light
tags:
- speech
license: apache-2.0
---
# Hubert-Extra-Large
[Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression)
The extra-large model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
The model was pretrained on [Libri-Light](https://github.com/facebookresearch/libri-light).
[Paper](https://arxiv.org/abs/2106.07447)
Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed
**Abstract**
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
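Beyond the fine-tuning pointer above, a minimal sketch of pulling hidden-state features from the pretrained checkpoint (assuming the repository ships a compatible feature-extractor config, and using dummy audio in place of real 16kHz speech) could look like:
```python
import torch
from transformers import HubertModel, Wav2Vec2FeatureExtractor

model_name = "facebook/hubert-xlarge-ll60k"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = HubertModel.from_pretrained(model_name)

# one second of dummy audio at 16 kHz stands in for real speech here
speech = torch.zeros(16000).numpy()
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```
|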
aditeyabaral/sentencetransformer-distilbert-hinglish-small | aditeyabaral | 2021-10-20T09:04:04Z | 173 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-distilbert-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lapcameraatp/cameragiamsat | lapcameraatp | 2021-10-20T08:53:25Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | ERROR: type should be string, got "https://camerasaigon24h.com\nhttps://cameragiamsat360.com\nhttps://lapdatcameracongty.vn\nhttps://lapdatcamerawifi.vn\nhttps://lapcamerawifi.com\nhttps://giacameraquansat.com\nhttps://cameraquansatre.com\nhttps://cameraanninhwifi.com\n\nhttps://camerawifigiadinh.com/\nhttps://lapcameratanphu.com\nhttp://camerathehemoi.com\nhttp://lapcameratanbinh.com\nhttp://lapcamerabinhtan.com\nhttp://lapcameraquan2giare.com\nhttp://cameraquan12.com\nhttp://cameraquan3giare.com\nhttp://lapdatcameraquan4.com\nhttp://lapdatcameraquan10.com\nhttp://lapdatcameraquan7.com\nhttp://camerabinhthanh.com\nhttp://lapcameraquan9giare.com\nhttp://lapdatcameraquan11.com\nhttp://lapcameragiarethuduc.com\nhttp://lapdatcameraquan6.com\nhttp://lapdatcameraquan5.com\nhttp://lapcameraquan1.com\nhttp://cameraquan8.com\nhttp://cameranhatranggiare.com\nhttp://lapcamerahocmon.com\nhttp://lapcameragiaregovap.com\nhttp://lapcameraphunhuan.com\nhttp://cameragiarebinhduong.com\nhttp://phanphoicameragiare.com\nhttp://camerawifigiadinh.com/\nhttp://cameraphanthietgiare.com/" |
mrm8488/t5-base-finetuned-break_data | mrm8488 | 2021-10-20T08:31:28Z | 962 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:break_data",
"arxiv:1910.10683",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- break_data
widget:
- text: "paraphrase: The composer of Sands Theme plays what type of guitar?"
---
# T5-base fine-tuned on break_data / QDMR-high-level
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on the [break_data](https://huggingface.co/nlp/viewer/?dataset=break_data&config=QDMR-high-level) dataset for **QDMRs**.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (QDMRs) - Dataset
Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| break_data | train | 17503 |
| break_data | valid | 3130 |
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The main change is in how the ```inputs``` and ```targets``` fed to the model are preprocessed: we frame it as a *paraphrasing task*.
## Model in Action
```python
# Tip: By now, install transformers from source
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-break_data")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-break_data")
def get_decomposition(question):
input_text = "paraphrase: %s </s>" % question
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=32)
return tokenizer.decode(output[0])
question = "The composer of Sands Theme plays what type of guitar?"
get_decomposition(question)
# output: 'return Sands Theme ;return composer of #1 ;return guitar that #2 plays'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
aditeyabaral/sentencetransformer-bert-hinglish-small | aditeyabaral | 2021-10-20T06:28:16Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test | Edomonndo | 2021-10-20T06:22:41Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: opus-mt-ja-en-finetuned-ja-to-en_test
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Bleu
type: bleu
value: 80.2723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ja-en-finetuned-ja-to-en_test
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4737
- Bleu: 80.2723
- Gen Len: 16.5492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.1237 | 1.0 | 247 | 0.6131 | 60.9383 | 16.4152 |
| 0.5395 | 2.0 | 494 | 0.5274 | 67.5705 | 16.2883 |
| 0.3584 | 3.0 | 741 | 0.5122 | 71.3098 | 16.3777 |
| 0.2563 | 4.0 | 988 | 0.4887 | 73.6639 | 16.401 |
| 0.138 | 5.0 | 1235 | 0.4796 | 76.7942 | 16.4873 |
| 0.0979 | 6.0 | 1482 | 0.4849 | 76.9404 | 16.6162 |
| 0.0792 | 7.0 | 1729 | 0.4806 | 78.9831 | 16.5442 |
| 0.0569 | 8.0 | 1976 | 0.4765 | 79.3461 | 16.4873 |
| 0.0299 | 9.0 | 2223 | 0.4751 | 79.7901 | 16.4863 |
| 0.0204 | 10.0 | 2470 | 0.4737 | 80.2723 | 16.5492 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
|
chrisjay/masakhane_benchmarks | chrisjay | 2021-10-20T05:55:51Z | 0 | 0 | null | [
"african-languages",
"machine-translation",
"text",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: african-languages
tags:
- african-languages
- machine-translation
- text
license: apache-2.0
model-index:
- name: Masakhane Benchmark Models
results:
- task:
name: Machine Translation
type: machine-translation
dataset:
name: masakhane benchmarks
args: african-languages
---
# Interacting with the Masakhane Benchmark Models
I created this demo for very easy interaction with the [benchmark models on Masakhane](https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks) which were trained with [JoeyNMT](https://github.com/chrisemezue/joeynmt) (my forked version).
To access the space click [here](https://huggingface.co/spaces/chrisjay/masakhane-benchmarks).
To include your language, all you need to do is:
1. Create a folder in the format *src-tgt/main* for your language pair, if it does not exist.
2. Inside the *main* folder, put the following files (see the layout sketch after this list):
1. model checkpoint. Rename it to `best.ckpt`.
 2. `config.yaml` file. This is the JoeyNMT config file, which loads the model and pre-processing parameters.
3. `src_vocab.txt` file.
4. `trg_vocab.txt` file.
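Put together, the expected layout for a new language pair (using a placeholder `src-tgt` folder name) would look like:
```
src-tgt/
└── main/
    ├── best.ckpt      # renamed model checkpoint
    ├── config.yaml    # JoeyNMT config (model and pre-processing parameters)
    ├── src_vocab.txt
    └── trg_vocab.txt
```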
The space currently supports these languages:
| source language | target language |
|:---------------:|:---------------:|
| English | Swahili |
| English | Afrikaans |
| English | Arabic |
| English | Urhobo |
| English | Ẹ̀dó |
| Efik | English |
| English | Hausa |
| English | Igbo |
| English | Fon |
| English | Twi |
| English | Dendi |
| English | Ẹ̀sán |
| English | Isoko |
| English | Kamba |
| English | Luo |
| English | Southern Ndebele |
| English | Tshivenda |
| Shona | English |
| Swahili | English |
| Yoruba | English |
TO DO:
1. Include more languages from the benchmark. |
Manishl7/xlm-roberta-large-language-detection | Manishl7 | 2021-10-20T05:20:44Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | Language Detection Model for Nepali, English, Hindi and Spanish
Model fine-tuned on xlm-roberta-large.
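No usage snippet is provided in the card; a minimal sketch with the `text-classification` pipeline (the example sentences are placeholders, and the returned label names depend on this model's config, which the card does not document) could look like:
```python
from transformers import pipeline

# label names come from the model's id2label mapping, which is not documented here
detector = pipeline("text-classification", model="Manishl7/xlm-roberta-large-language-detection")
print(detector(["How are you doing today?", "नमस्ते, तपाईंलाई कस्तो छ?"]))
```
|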
yazdipour/text-to-sparql-t5-base-qald9 | yazdipour | 2021-10-19T23:25:20Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-base-2021-10-19_23-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-base-2021-10-19_23-02
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 1.8300 | 19.0 | 0.3640 | 0.0346 | 0.1943 | 10.0358 | [72.88988261598658, 50.27455765710799, 35.93015446608462, 28.454070201643017] | 0.2281 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
aditeyabaral/sentencetransformer-roberta-hinglish-big | aditeyabaral | 2021-10-19T22:41:56Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-roberta-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/iamdevloper | huggingtweets | 2021-10-19T20:59:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/iamdevloper/1634677176847/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1178631635606151168/yIlrcg4o_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">I Am Devloper</div>
<div style="text-align: center; font-size: 14px;">@iamdevloper</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from I Am Devloper.
| Data | I Am Devloper |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 190 |
| Short tweets | 233 |
| Tweets kept | 2821 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2k1120ro/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iamdevloper's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wr63mia) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wr63mia/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iamdevloper')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aditeyabaral/sentencetransformer-bert-hinglish-big | aditeyabaral | 2021-10-19T19:38:38Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
hugggof/ConvTasNet-DAMP-Vocals | hugggof | 2021-10-19T19:28:08Z | 0 | 2 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
sample_rate: 8000
---
This is an Audacity wrapper for the model, forked from the repository `groadabike/ConvTasNet_DAMP-VSEP_enhboth`.
This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.
The following info was copied directly from `groadabike/ConvTasNet_DAMP-VSEP_enhboth`:
### Description:
This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset.
### Training config:
```yaml
data:
channels: 1
n_src: 2
root_path: data
sample_rate: 16000
samples_per_track: 10
segment: 3.0
task: enh_both
filterbank:
kernel_size: 20
n_filters: 256
stride: 10
main_args:
exp_dir: exp/train_convtasnet
help: None
masknet:
bn_chan: 256
conv_kernel_size: 3
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 4
n_src: 2
norm_type: gLN
skip_chan: 256
optim:
lr: 0.0003
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 12
early_stop: True
epochs: 50
half_lr: True
num_workers: 12
```
### Results:
```yaml
si_sdr: 14.018196157142519
si_sdr_imp: 14.017103133809577
sdr: 14.498517291333885
sdr_imp: 14.463389151567865
sir: 24.149634529133372
sir_imp: 24.11450638936735
sar: 15.338597389045935
sar_imp: -137.30634122401517
stoi: 0.7639416744417206
stoi_imp: 0.1843383526963759
```
### License notice:
This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
|
hugggof/ConvTasNet_Libri3Mix_sepnoisy_16k | hugggof | 2021-10-19T19:26:57Z | 0 | 1 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
---
This is an Audacity wrapper for the model, forked from the repository `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`.
This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.
The following info was copied directly from `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`:
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
hugggof/ConvTasNet_WHAM_sepclean | hugggof | 2021-10-19T19:25:37Z | 0 | 0 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
---
This is an Audacity wrapper for the model, forked from the repository `mpariente/ConvTasNet_WHAM_sepclean`.
This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.
The following info was copied from `mpariente/ConvTasNet_WHAM_sepclean`:
### Description:
This model was trained by Manuel Pariente
using the wham/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the WHAM! dataset.
### Training config:
```yaml
data:
n_src: 2
mode: min
nondefault_nsrc: None
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/tr/
valid_dir: data/wav8k/min/cv/
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/wham
gpus: -1
help: None
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 2
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
### Results:
```yaml
si_sdr: 16.21326632846293
si_sdr_imp: 16.21441705664987
sdr: 16.615180021738933
sdr_imp: 16.464137807433435
sir: 26.860503975131923
sir_imp: 26.709461760826414
sar: 17.18312813480803
sar_imp: -131.99332048277296
stoi: 0.9619940905157323
stoi_imp: 0.2239480672473015
```
### License notice:
This work "ConvTasNet_WHAM!_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"ConvTasNet_WHAM!_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente. |
huggingtweets/gerardsans | huggingtweets | 2021-10-19T19:13:05Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/gerardsans/1634670781074/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1431241007421665284/qoHnns8I_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">แธGerardSans/แณ๐คฃ๐ฌ๐ง</div>
<div style="text-align: center; font-size: 14px;">@gerardsans</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from แธGerardSans/แณ๐คฃ๐ฌ๐ง.
| Data | แธGerardSans/แณ๐คฃ๐ฌ๐ง |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 648 |
| Short tweets | 586 |
| Tweets kept | 2016 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/115pr1rh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gerardsans's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10heg4by) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10heg4by/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gerardsans')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
yazdipour/text-to-sparql-t5-base | yazdipour | 2021-10-19T18:16:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-base-2021-10-19_15-35_lastDS
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.3275993764400482
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-19_15-35_lastDS
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1310
- Gen Len: 19.0
- P: 0.5807
- R: 0.0962
- F1: 0.3276
- Score: 6.4533
- Bleu-precisions: [92.48113990507008, 85.38781447185119, 80.57856404313097, 77.37314727416516]
- Bleu-bp: 0.0770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| nan | 1.0 | 4807 | 0.1310 | 19.0 | 0.5807 | 0.0962 | 0.3276 | 6.4533 | [92.48113990507008, 85.38781447185119, 80.57856404313097, 77.37314727416516] | 0.0770 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
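The card does not include an inference example; a minimal sketch for generating SPARQL from a natural-language question (the question is a placeholder, and the exact input format used during training is not documented here) could look like:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yazdipour/text-to-sparql-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "What is the capital of Canada?"  # illustrative question only
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```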
|
maxspaziani/bert-base-italian-xxl-uncased-finetuned-ComunaliRoma | maxspaziani | 2021-10-19T17:58:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-italian-xxl-uncased-finetuned-ComunaliRoma
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-italian-xxl-uncased-finetuned-ComunaliRoma
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-uncased](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6717 | 1.0 | 1014 | 2.6913 |
| 2.4869 | 2.0 | 2028 | 2.5843 |
| 2.3411 | 3.0 | 3042 | 2.5095 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
doc2query/stackexchange-t5-base-v1 | doc2query | 2021-10-19T16:26:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/stackexchange-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
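If reproducible queries are needed, one option (an assumption on top of the card, not part of it) is to fix the random seed before calling `model.generate()`, or to switch to deterministic beam search:

```python
from transformers import set_seed

set_seed(42)  # fixes the sampling RNG so the generated queries repeat across runs

# Alternatively, drop sampling entirely and use beam search for deterministic output:
# outputs = model.generate(input_ids=input_ids, max_length=64,
#                          num_beams=5, num_return_sequences=5)
```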
## Training
This model was fine-tuned from [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 449k training steps. For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces; the output text was generated with up to 64 word pieces.
This model was trained on (title, best_answer) pairs from StackExchange.
|
Recognai/selectra_small | Recognai | 2021-10-19T15:28:17Z | 6 | 5 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"es",
"dataset:oscar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language:
- es
thumbnail: "url to a thumbnail used in social sharing"
license: apache-2.0
datasets:
- oscar
---
# SELECTRA: A Spanish ELECTRA
SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release a `small` and `medium` version with the following configuration:
| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| **SELECTRA small** | **12** | **256** | **22M** | **50k** | **512** | **True** |
| [SELECTRA medium](https://huggingface.co/Recognai/selectra_medium) | 12 | 384 | 41M | 50k | 512 | True |
**SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results** (see Metrics section below).
## Usage
From the original [ELECTRA model card](https://huggingface.co/google/electra-small-discriminator): "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")
sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]
print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando pan rosa con tomate y aceite de oliva .
-3.1 -3.6 -6.9 -3.0 0.19 -4.5 -3.3 -5.1 -5.7 -7.7 -4.4 -4.2
"""
```
However, you will probably want to fine-tune this model on a downstream task.
We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which can be used together with the zero-shot classification pipeline (see the sketch after the list below):
- [Zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small)
- [Zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium)
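A minimal zero-shot classification sketch with one of these checkpoints; the example sentence, candidate labels, and hypothesis template are illustrative assumptions:

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="Recognai/zeroshot_selectra_small"
)
result = classifier(
    "El equipo ganó el partido por tres goles a cero",  # hypothetical input
    candidate_labels=["deportes", "política", "economía"],
    hypothesis_template="Este ejemplo es {}."
)
print(result["labels"][0], result["scores"][0])
```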
## Metrics
We fine-tune our models on 3 different down-stream tasks:
- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
| --- | --- | --- | --- | --- |
| SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
| SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
| | | | | |
| [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
| [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
| [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
| [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
| [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
| [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
Some details of our fine-tuning runs:
- epochs: 5
- batch-size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
For all the details, check out our [selectra repo](https://github.com/recognai/selectra).
## Training
We pre-trained our SELECTRA models on the Spanish portion of the [Oscar](https://huggingface.co/datasets/oscar) dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
- steps: 300k
- batch-size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
For all details, check out our [selectra repo](https://github.com/recognai/selectra).
**Note:** Due to a misconfiguration in the pre-training scripts, the embeddings of vocabulary tokens containing accents were not optimized. If you fine-tune this model on a down-stream task, you might consider using a tokenizer that does not strip the accents:
```python
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```
## Motivation
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models and a lack of comparisons between them and their bigger siblings.
## Acknowledgment
This research was supported by the Google TPU Research Cloud (TRC) program.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Javier Lopez ([GitHub](https://github.com/javispp))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon)) |
Fhrozen/test_an4 | Fhrozen | 2021-10-19T15:20:32Z | 3 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:an4",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- an4
license: cc-by-4.0
---
## ESPnet2 ASR model
### `Fhrozen/test_an4`
This model was trained by Fhrozen using an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b8df4c928e132acff78d196988bdb68a66987952
pip install -e .
cd egs2/an4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Fhrozen/test_an4
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 20 00:00:46 JST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `b8df4c928e132acff78d196988bdb68a66987952`
- Commit date: `Tue Oct 19 07:48:11 2021 -0400`
## asr_train_raw_en_bpe30
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|773|4.0|22.3|73.7|0.1|96.1|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|591|2.7|21.8|75.5|0.0|97.3|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2565|17.2|16.4|66.4|1.0|83.8|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|1915|15.5|16.4|68.1|0.9|85.5|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2695|21.1|15.6|63.3|0.9|79.9|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|2015|19.4|15.6|65.0|0.9|81.5|100.0|
## ASR config
<details><summary>expand</summary>
```
config: null
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_raw_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe30/train/speech_shape
- exp/asr_stats_raw_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe30/valid/speech_shape
- exp/asr_stats_raw_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev/wav.scp
- speech
- sound
- - dump/raw/train_nodev/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf: {}
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.5
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe30/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: rnn
encoder_conf: {}
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
config: conf/train_lm.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_train_lm_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 256
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/lm_stats_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/lm_train.txt
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
model_conf:
ignore_id: 0
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
lm: seq_rnn
lm_conf:
unit: 650
nlayers: 2
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
|
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo | patrickvonplaten | 2021-10-19T14:00:49Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## XLSR-Wav2Vec2 Fine-Tuned on Turkish Common Voice dataset
The model was fine-tuned in a Google Colab notebook for demonstration purposes.
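No usage code is included in the card; a minimal transcription sketch with the generic ASR pipeline follows (the audio path is a placeholder and 16 kHz input is assumed):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-large-xlsr-turkish-demo"
)

# Placeholder path; any 16 kHz Turkish speech recording should work.
print(asr("sample_turkish.wav")["text"])
```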
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about the model. |
soikit/distilgpt2-finetuned-wikitext2 | soikit | 2021-10-19T13:23:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
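The card stops at the metrics; here is a minimal generation sketch (the prompt is a hypothetical example):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='soikit/distilgpt2-finetuned-wikitext2')

# Hypothetical prompt; the model continues it in WikiText-style prose.
print(generator("The history of the city", max_length=50)[0]['generated_text'])
```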
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
doc2query/all-with_prefix-t5-base-v1 | doc2query | 2021-10-19T12:52:47Z | 1,990 | 10 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/reddit-title-body",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- sentence-transformers/reddit-title-body
- sentence-transformers/embedding-training-data
widget:
- text: "text2reddit: Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/all-with_prefix-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/all-with_prefix-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
prefix = "answer2question"
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
text = prefix+": "+text
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 575k training steps. For the training script, see `train_script.py` in this repository.
The input text was truncated to 384 word pieces; the output text was generated with up to 64 word pieces.
This model was trained on a large collection of datasets. For the exact dataset names and weights, see the `data_config.json` in this repository. Most of the datasets are available at [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers).
The datasets include besides others:
- (title, body) pairs from [Reddit](https://huggingface.co/datasets/sentence-transformers/reddit-title-body)
- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!
- (title, review) pairs from Amazon reviews
- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ
- (question, duplicate_question) from Quora and WikiAnswers
- (title, abstract) pairs from S2ORC
## Prefix
This model was trained **with a prefix**: you start the text with a specific prefix that defines what type of output text you would like to receive. Depending on the prefix, the output is different.
E.g. the above text about Python produces the following output:
| Prefix | Output |
| --- | --- |
| answer2question | Why should I use python in my business? ; What is the difference between Python and.NET? ; what is the python design philosophy? |
| review2title | Python a powerful and useful language ; A new and improved programming language ; Object-oriented, practical and accessibl |
| abstract2title | Python: A Software Development Platform ; A Research Guide for Python X: Conceptual Approach to Programming ; Python : Language and Approach |
| text2query | is python a low level language? ; what is the primary idea of python? ; is python a programming language? |
These are all the available prefixes:
- text2reddit
- question2title
- answer2question
- abstract2title
- review2title
- news2title
- text2query
- question2question
For the datasets and weights for the different prefixes, see `data_config.json` in this repository.
|
Jeska/autonlp-vaccinfaq-22144706 | Jeska | 2021-10-19T12:33:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:Jeska/autonlp-data-vaccinfaq",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- Jeska/autonlp-data-vaccinfaq
co2_eq_emissions: 27.135492487925884
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 22144706
- CO2 Emissions (in grams): 27.135492487925884
## Validation Metrics
- Loss: 1.81697416305542
- Accuracy: 0.6377269139700079
- Macro F1: 0.5181293370145044
- Micro F1: 0.6377269139700079
- Weighted F1: 0.631117826235572
- Macro Precision: 0.5371452512845428
- Micro Precision: 0.6377269139700079
- Weighted Precision: 0.6655055695465463
- Macro Recall: 0.5609328178925124
- Micro Recall: 0.6377269139700079
- Weighted Recall: 0.6377269139700079
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Jeska/autonlp-vaccinfaq-22144706
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Emanuel/autonlp-pos-tag-bosque | Emanuel | 2021-10-19T12:09:29Z | 19 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autonlp",
"pt",
"dataset:Emanuel/autonlp-data-pos-tag-bosque",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags: autonlp
language: pt
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- Emanuel/autonlp-data-pos-tag-bosque
co2_eq_emissions: 6.2107269129101805
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 21124427
- CO2 Emissions (in grams): 6.2107269129101805
## Validation Metrics
- Loss: 0.09813392907381058
- Accuracy: 0.9714309035997062
- Precision: 0.9721275936822545
- Recall: 0.9735345807918949
- F1: 0.9728305785123967
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Emanuel/autonlp-pos-tag-bosque-21124427
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Emanuel/autonlp-pos-tag-bosque")
tokenizer = AutoTokenizer.from_pretrained("Emanuel/autonlp-pos-tag-bosque")
inputs = tokenizer("A noiva casa de branco", return_tensors="pt")
outputs = model(**inputs)
labelids = outputs.logits.squeeze().argmax(axis=-1)
labels = [model.config.id2label[int(x)] for x in labelids]
labels = labels[1:-1]# Filter start and end of sentence symbols
``` |
yazdipour/text-to-sparql-t5-small | yazdipour | 2021-10-19T11:17:46Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-19_10-17_lastDS
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.3129461705684662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-19_10-17_lastDS
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2335
- Gen Len: 19.0
- P: 0.5580
- R: 0.0884
- F1: 0.3129
- Score: 5.9585
- Bleu-precisions: [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271]
- Bleu-bp: 0.0763
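As with the base variant earlier in this document, no usage snippet is given; here is a minimal sketch with the text2text-generation pipeline (the question is hypothetical, and the plain-question input format is an assumption):

```python
from transformers import pipeline

text2sparql = pipeline('text2text-generation', model='yazdipour/text-to-sparql-t5-small')

# Hypothetical natural-language question to translate into SPARQL.
print(text2sparql("Who wrote Le Petit Prince?", max_length=64)[0]['generated_text'])
```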
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3166 | 1.0 | 4807 | 0.2335 | 19.0 | 0.5580 | 0.0884 | 0.3129 | 5.9585 | [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271] | 0.0763 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
DeepESP/gpt2-spanish | DeepESP | 2021-10-19T08:52:48Z | 5,155 | 36 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"es",
"dataset:ebooks",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: es
tags:
- GPT-2
- Spanish
- ebooks
- nlg
datasets:
- ebooks
widget:
- text: "Quisiera saber que va a suceder"
license: mit
---
# GPT2-Spanish
GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the small version of the original OpenAI GPT2 model.
## Corpus
This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization).
## Tokenizer
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.
This tokenizer was trained from scratch on the Spanish corpus, since the tokenizer of the English models proved limited in capturing the semantic relations of Spanish, owing to the morphosyntactic differences between the two languages.
Apart from the special token "<|endoftext|>" that marks the end of a text in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>" (...) "<|ax9|>" were included so that they can serve as prompts in future training.
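A minimal generation sketch using the prompt from the widget in the metadata above; the sampling settings are illustrative assumptions:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='DeepESP/gpt2-spanish')

outputs = generator(
    "Quisiera saber que va a suceder",  # prompt taken from the card's widget
    max_length=60,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=3,
)
for out in outputs:
    print(out['generated_text'])
```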
## Training
The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers.
## Authors
The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h).
Thanks to the members of the community who collaborated with funding for the initial tests.
## Cautions
The model generates texts according to the patterns learned in the training corpus. These data were not filtered; therefore, the model could generate offensive or discriminatory content.
|
yazdipour/sparql-qald9-t5-small-2021-10-19_07-12_RAW | yazdipour | 2021-10-19T07:25:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-small-2021-10-19_07-12_RAW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_07-12_RAW
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.8581 | 19.0 | 0.3301 | 0.0433 | 0.1830 | 7.5917 | [69.82603479304139, 45.68226763348714, 32.33357717629846, 24.56861133935908] | 0.1903 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Tarang1998/autonlp-pegasus-21664560 | Tarang1998 | 2021-10-19T05:22:41Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:Tarang1998/autonlp-data-pegasus",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- Tarang1998/autonlp-data-pegasus
co2_eq_emissions: 5.680803958729511
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21664560
- CO2 Emissions (in grams): 5.680803958729511
## Validation Metrics
- Loss: 1.7488420009613037
- Rouge1: 38.1491
- Rouge2: 18.6257
- RougeL: 26.8448
- RougeLsum: 32.2433
- Gen Len: 49.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Tarang1998/autonlp-pegasus-21664560
``` |