Dataset schema (per-column dtype and value range):

| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-04 18:27:18 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (468 classes) | n/a | n/a |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-04 18:26:45 |
| card | string (length) | 11 | 1.01M |
yanaiela/roberta-base-epoch_31
yanaiela
2022-07-29T22:50:29Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_31", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:14:05Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_31
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 31

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_31.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_31', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
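Since the checkpoints are released to study training dynamics, here is a minimal sketch of comparing mask-fill predictions across epochs (the probing sentence and checkpoint choices are illustrative; all four checkpoints exist in this family):

```python
from transformers import pipeline

# Compare top predictions for the same masked sentence across training epochs
for epoch in [0, 5, 31, 83]:  # illustrative checkpoints
    fill = pipeline("fill-mask", model=f"yanaiela/roberta-base-epoch_{epoch}", device=-1, top_k=3)
    preds = fill("The capital of France is <mask>.")
    print(epoch, [p["token_str"] for p in preds])
```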
yanaiela/roberta-base-epoch_30
yanaiela
2022-07-29T22:50:11Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_30", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:13:21Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_30
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 30

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_30.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_30', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_28
yanaiela
2022-07-29T22:49:33Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_28", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:11:20Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_28
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 28

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_28.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_28', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_23
yanaiela
2022-07-29T22:48:00Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_23", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:07:39Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_23
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 23

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_23.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_23', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_21
yanaiela
2022-07-29T22:47:23Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_21", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:06:01Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_21
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 21

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_21.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_21', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_19
yanaiela
2022-07-29T22:46:46Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_19", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:04:23Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_19
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 19

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_19.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_19', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_18
yanaiela
2022-07-29T22:46:26Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_18", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:03:29Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_18
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 18

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_18.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_18', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_17
yanaiela
2022-07-29T22:46:08Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_17", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:02:47Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_17
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 17

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_17.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_17', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_15
yanaiela
2022-07-29T22:45:30Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_15", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T17:01:23Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_15
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 15

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_15.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_15', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_13
yanaiela
2022-07-29T22:44:53Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_13", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T16:59:49Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_13
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 13

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_13.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_13', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_12
yanaiela
2022-07-29T22:44:35Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_12", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T16:59:07Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_12
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 12

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_12.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_12', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_8
yanaiela
2022-07-29T22:43:21Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_8", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T16:55:28Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_8
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 8

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_8.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_8', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_6
yanaiela
2022-07-29T22:42:43Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_6", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T16:53:54Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_6
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 6

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_6.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_6', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_5
yanaiela
2022-07-29T22:42:26Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_5", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T16:53:10Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_5
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 5

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_5.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_5', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_1
yanaiela
2022-07-29T22:41:07Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_1", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T16:49:55Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_1
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 1

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_1.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_1', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
yanaiela/roberta-base-epoch_0
yanaiela
2022-07-29T22:38:30Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "roberta-base", "roberta-base-epoch_0", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:1907.11692", "arxiv:2207.14251", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T18:33:20Z
---
language: en
tags:
- roberta-base
- roberta-base-epoch_0
license: mit
datasets:
- wikipedia
- bookcorpus
---

# RoBERTa, Intermediate Checkpoint - Epoch 0

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases.

These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_0.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data, and training procedure for the fully trained model are similar to those of [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps instead of 500K.
* We used only Wikipedia and the Book Corpus, both of which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

# Load this epoch's checkpoint on CPU (device=-1) and return the top 10 predictions
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_0', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
platzi/platzi-distilroberta-base-mrpc-glue-omar-espejel
platzi
2022-07-29T21:57:21Z
15
1
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-29T12:17:21Z
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.", "Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
  example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
  example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-omar-espejel
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: mrpc
      split: train
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8431372549019608
    - name: F1
      type: f1
      value: 0.8861209964412811
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# platzi-distilroberta-base-mrpc-glue-omar-espejel

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6332
- Accuracy: 0.8431
- F1: 0.8861

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5076        | 1.09  | 500  | 0.7464          | 0.8137   | 0.8671 |
| 0.3443        | 2.18  | 1000 | 0.6332          | 0.8431   | 0.8861 |

### Framework versions

- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
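A minimal paraphrase-detection sketch using one of the widget pairs above (the pipeline accepts a sentence pair as a `text`/`text_pair` dict; label names depend on the model's config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-omar-espejel",
)
pair = {
    "text": "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
    "text_pair": "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier.",
}
print(classifier(pair))  # expected: the "equivalent" class for this pair
```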
mrm8488/q-Taxi-v3
mrm8488
2022-07-29T21:37:20Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T20:43:55Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub`, `gym`, and `evaluate_agent` are assumed to be in scope,
# as in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="mrm8488/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
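For illustration, acting greedily with the loaded Q-table looks roughly like this (a sketch assuming the classic `gym` step API and integer-indexed states, as in the course setup):

```python
import numpy as np

def greedy_policy(qtable, state):
    # Choose the action with the highest Q-value for the current state
    return int(np.argmax(qtable[state]))

# Roll out one greedy episode (reset/step signatures vary across gym versions)
state = env.reset()
done = False
while not done:
    state, reward, done, info = env.step(greedy_policy(model["qtable"], state))
```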
jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15
jungjongho
2022-07-29T21:25:56Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-29T16:39:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-korean-demo-colab_epoch15
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xlsr-korean-demo-colab_epoch15

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on a Korean speech dataset (per the model name; the dataset was logged as "None" by the Trainer).
It achieves the following results on the evaluation set:
- Loss: 0.4133
- Wer: 0.3801

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.9017       | 0.8   | 400  | 4.6273          | 1.0    |
| 4.4633        | 1.6   | 800  | 4.4419          | 1.0    |
| 4.2262        | 2.4   | 1200 | 3.8477          | 0.9994 |
| 2.4402        | 3.21  | 1600 | 1.3564          | 0.8111 |
| 1.3499        | 4.01  | 2000 | 0.9070          | 0.6664 |
| 0.9922        | 4.81  | 2400 | 0.7496          | 0.6131 |
| 0.8271        | 5.61  | 2800 | 0.6240          | 0.5408 |
| 0.6918        | 6.41  | 3200 | 0.5506          | 0.5026 |
| 0.6015        | 7.21  | 3600 | 0.5303          | 0.4935 |
| 0.5435        | 8.02  | 4000 | 0.4951          | 0.4696 |
| 0.4584        | 8.82  | 4400 | 0.4677          | 0.4432 |
| 0.4258        | 9.62  | 4800 | 0.4602          | 0.4307 |
| 0.3906        | 10.42 | 5200 | 0.4456          | 0.4195 |
| 0.3481        | 11.22 | 5600 | 0.4265          | 0.4062 |
| 0.3216        | 12.02 | 6000 | 0.4241          | 0.4046 |
| 0.2908        | 12.83 | 6400 | 0.4106          | 0.3941 |
| 0.2747        | 13.63 | 6800 | 0.4146          | 0.3855 |
| 0.2633        | 14.43 | 7200 | 0.4133          | 0.3801 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
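For inference, a minimal transcription sketch (`audio.wav` is a placeholder for a 16 kHz Korean speech file):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15",
)
print(asr("audio.wav")["text"])  # transcription of the input audio
```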
jackoyoungblood/ppo-LunarLander-v2b
jackoyoungblood
2022-07-29T21:03:11Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T21:02:51Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 236.21 +/- 14.68
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(
    repo_id="jackoyoungblood/ppo-LunarLander-v2b",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
jackoyoungblood/ppo-LunarLander-v2
jackoyoungblood
2022-07-29T20:49:52Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T15:34:52Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 261.42 +/- 23.22
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(
    repo_id="jackoyoungblood/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy
Atharvgarg
2022-07-29T17:50:17Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "summarisation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-29T17:08:53Z
---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy

This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on a BBC News dataset (per the model name; the dataset was not recorded in the training metadata).
It achieves the following results on the evaluation set:
- Loss: 0.3228
- Rouge1: 56.5706
- Rouge2: 43.0906
- Rougel: 47.9957
- Rougelsum: 53.417

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.3226        | 1.0   | 223  | 0.3225          | 55.7639 | 41.9414 | 46.9804 | 52.5639   |
| 0.262         | 2.0   | 446  | 0.3198          | 55.7522 | 42.0929 | 46.8388 | 52.6659   |
| 0.2153        | 3.0   | 669  | 0.3195          | 55.7091 | 42.2111 | 47.2641 | 52.5765   |
| 0.1805        | 4.0   | 892  | 0.3164          | 55.8115 | 42.5536 | 47.3529 | 52.7672   |
| 0.1527        | 5.0   | 1115 | 0.3203          | 56.8658 | 43.4238 | 48.2268 | 53.8136   |
| 0.14          | 6.0   | 1338 | 0.3234          | 55.7138 | 41.8562 | 46.8362 | 52.5201   |
| 0.1252        | 7.0   | 1561 | 0.3228          | 56.5706 | 43.0906 | 47.9957 | 53.417    |
| 0.1229        | 8.0   | 1784 | 0.3228          | 56.5706 | 43.0906 | 47.9957 | 53.417    |

### Framework versions

- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
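For inference, a minimal summarization sketch (the generation lengths are illustrative):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy",
)
article = "..."  # replace with the text of a news article
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```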
andres-hsn/q-Taxi-v3
andres-hsn
2022-07-29T17:02:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T17:02:38Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - metrics:
    - type: mean_reward
      value: 7.54 +/- 2.72
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub`, `gym`, and `evaluate_agent` are assumed to be in scope,
# as in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="andres-hsn/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
Datasaur/distilbert-base-uncased-finetuned-ag-news
Datasaur
2022-07-29T16:36:20Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "dataset:ag-news", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-17T02:53:35Z
---
language: en
license: apache-2.0
datasets:
- ag-news
---
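Given the metadata above (a DistilBERT text-classification model fine-tuned on AG News), a minimal usage sketch:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Datasaur/distilbert-base-uncased-finetuned-ag-news",
)
# AG News covers four topics: World, Sports, Business, Sci/Tech
print(classifier("Wall St. rallies as tech stocks rebound."))
```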
kdf/javascript-docstring-generation
kdf
2022-07-29T15:32:50Z
7
0
transformers
[ "transformers", "pytorch", "codegen", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-29T12:04:31Z
---
license: apache-2.0
widget:
- text: "<|endoftext|>\nfunction getDateAfterNDay(n){\n  return moment().add(n, 'day')\n}\n// docstring\n/**"
---

## Basic info

This model is based on [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono), fine-tuned on [codeparrot/github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean) filtered to JavaScript and TypeScript.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_type = 'kdf/javascript-docstring-generation'
tokenizer = AutoTokenizer.from_pretrained(model_type)
model = AutoModelForCausalLM.from_pretrained(model_type)

def generate_docstring(prompt, doc_max_length=128):
    # Greedy decoding of up to `doc_max_length` extra tokens; 50256 is <|endoftext|>
    inputs = tokenizer(prompt, return_tensors='pt')
    generated_ids = model.generate(
        **inputs,
        max_length=inputs.input_ids.shape[1] + doc_max_length,
        do_sample=False,
        return_dict_in_generate=True,
        num_return_sequences=1,
        output_scores=True,
        pad_token_id=50256,
        eos_token_id=50256  # <|endoftext|>
    )
    return tokenizer.decode(generated_ids.sequences[0], skip_special_tokens=False)

print(generate_docstring('''<|endoftext|>
function getDateAfterNDay(n){
    return moment().add(n, 'day')
}
// docstring
/**'''))
```

## Prompt

You can steer the model toward a docstring style or a specific natural language by prepending an example. The first prompt below seeds an English docstring style, the second a Chinese one:

```python
print(generate_docstring('''<|endoftext|>
function add(a, b){
    return a + b;
}
// docstring
/**
 * Calculate number add.
 * @param a {number} the first number to add
 * @param b {number} the second number to add
 * @return the result of a + b
 */
<|endoftext|>
function getDateAfterNDay(n){
    return moment().add(n, 'day')
}
// docstring
/**'''))

print(generate_docstring('''<|endoftext|>
function add(a, b){
    return a + b;
}
// docstring
/**
 * 计算数字相加
 * @param a {number} 第一个加数
 * @param b {number} 第二个加数
 * @return 返回 a + b 的结果
 */
<|endoftext|>
function getDateAfterNDay(n){
    return moment().add(n, 'day')
}
// docstring
/**'''))
```
Lovesaif/bert-finetuned-squad
Lovesaif
2022-07-29T15:14:15Z
3
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-07-27T03:19:59Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Lovesaif/bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Lovesaif/bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the SQuAD dataset (per the model name; the dataset was not recorded by the Keras callback).
It achieves the following results on the evaluation set:
- Train Loss: 0.5635
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16638, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2643     | 0     |
| 0.7787     | 1     |
| 0.5635     | 2     |

### Framework versions

- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
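For inference, a minimal extractive question-answering sketch (the repository ships TensorFlow weights, so the TF framework is requested explicitly):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Lovesaif/bert-finetuned-squad", framework="tf")
result = qa(
    question="Where does Saif live?",
    context="My name is Saif and I live in London.",
)
print(result["answer"])  # -> "London"
```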
phjhk/hklegal-xlm-r-base-t
phjhk
2022-07-29T14:53:09Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:1911.02116", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-26T16:41:57Z
---
language: 
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---

# Model Description

The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This checkpoint is [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) fine-tuned on legal text from the Hong Kong Legal Information Institute (HKLII).

- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for the full list; this checkpoint is further fine-tuned on Hong Kong legal text
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base)

The Hong Kong Legal Information Institute [HKLII](https://www.hklii.hk/eng/) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on the HKLII corpus, which contains legal documents relating to Hong Kong.

# Uses

The model is a pretrained, fine-tuned language model. It can be used for document classification and Named Entity Recognition (NER), especially in the legal domain.

```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("phjhk/hklegal-xlm-r-base-t")
>>> model = AutoModelForTokenClassification.from_pretrained("phjhk/hklegal-xlm-r-base-t")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```

# Citation

**BibTeX:**

```bibtex
@article{conneau2019unsupervised,
  title={Unsupervised Cross-lingual Representation Learning at Scale},
  author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
  journal={arXiv preprint arXiv:1911.02116},
  year={2019}
}
```
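
Since the checkpoint hosts a masked-language-model head (the repository's pipeline tag is fill-mask), it can also be queried directly with the fill-mask pipeline. A usage sketch (the example sentence is illustrative):

```python
>>> from transformers import pipeline
>>> fill = pipeline("fill-mask", model="phjhk/hklegal-xlm-r-base-t")
>>> for pred in fill("The court dismissed the <mask>."):
...     print(pred["token_str"], round(pred["score"], 4))
```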
phjhk/hklegal-xlm-r-base
phjhk
2022-07-29T14:52:30Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:1911.02116", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-26T15:52:19Z
---
language: 
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---

# Model Description

The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This checkpoint is [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) fine-tuned on legal text from the Hong Kong Legal Information Institute (HKLII).

- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for the full list; this checkpoint is further fine-tuned on Hong Kong legal text
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base)

The Hong Kong Legal Information Institute [HKLII](https://www.hklii.hk/eng/) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on the HKLII corpus, which contains legal documents relating to Hong Kong.

# Uses

The model is a pretrained, fine-tuned language model. It can be used for document classification and Named Entity Recognition (NER), especially in the legal domain.

```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("phjhk/hklegal-xlm-r-base")
>>> model = AutoModelForTokenClassification.from_pretrained("phjhk/hklegal-xlm-r-base")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```

# Citation

**BibTeX:**

```bibtex
@article{conneau2019unsupervised,
  title={Unsupervised Cross-lingual Representation Learning at Scale},
  author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
  journal={arXiv preprint arXiv:1911.02116},
  year={2019}
}
```
phjhk/hklegal-xlm-r-large
phjhk
2022-07-29T14:51:34Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:1911.02116", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-29T14:29:20Z
---
language: 
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---

# Model Description

The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This checkpoint is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned on legal text from the Hong Kong Legal Information Institute (HKLII).

- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for the full list; this checkpoint is further fine-tuned on Hong Kong legal text
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)

The Hong Kong Legal Information Institute [HKLII](https://www.hklii.hk/eng/) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on the HKLII corpus, which contains legal documents relating to Hong Kong.

# Uses

The model is a pretrained, fine-tuned language model. It can be used for document classification and Named Entity Recognition (NER), especially in the legal domain.

```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("phjhk/hklegal-xlm-r-large")
>>> model = AutoModelForTokenClassification.from_pretrained("phjhk/hklegal-xlm-r-large")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```

# Citation

**BibTeX:**

```bibtex
@article{conneau2019unsupervised,
  title={Unsupervised Cross-lingual Representation Learning at Scale},
  author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
  journal={arXiv preprint arXiv:1911.02116},
  year={2019}
}
```
phjhk/hklegal-xlm-r-large-t
phjhk
2022-07-29T14:50:13Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:1911.02116", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-26T17:14:00Z
---
language: 
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---

# Model Description

The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This checkpoint is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned on legal text from the Hong Kong Legal Information Institute (HKLII).

- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for the full list; this checkpoint is further fine-tuned on Hong Kong legal text
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)

The Hong Kong Legal Information Institute [HKLII](https://www.hklii.hk/eng/) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on the HKLII corpus, which contains legal documents relating to Hong Kong.

# Uses

The model is a pretrained, fine-tuned language model. It can be used for document classification and Named Entity Recognition (NER), especially in the legal domain.

```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("phjhk/hklegal-xlm-r-large-t")
>>> model = AutoModelForTokenClassification.from_pretrained("phjhk/hklegal-xlm-r-large-t")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```

# Citation

**BibTeX:**

```bibtex
@article{conneau2019unsupervised,
  title={Unsupervised Cross-lingual Representation Learning at Scale},
  author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
  journal={arXiv preprint arXiv:1911.02116},
  year={2019}
}
```
silviacamplani/distilbert-uncase-direct-finetuning-ai-ner_3labels
silviacamplani
2022-07-29T14:41:55Z
3
0
transformers
[ "transformers", "tf", "distilbert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-29T14:33:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: silviacamplani/distilbert-uncase-direct-finetuning-ai-ner_3labels results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # silviacamplani/distilbert-uncase-direct-finetuning-ai-ner_3labels This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6593 - Validation Loss: 0.6130 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9721 | 1.8113 | 0 | | 1.6564 | 1.5052 | 1 | | 1.3640 | 1.2332 | 2 | | 1.1078 | 0.9996 | 3 | | 0.9158 | 0.8249 | 4 | | 0.7850 | 0.7188 | 5 | | 0.7135 | 0.6595 | 6 | | 0.6822 | 0.6310 | 7 | | 0.6394 | 0.6171 | 8 | | 0.6593 | 0.6130 | 9 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
platzi/platzi-bert-base-mrpc-glue-omar-espejel
platzi
2022-07-29T13:50:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-29T13:37:08Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: platzi-bert-base-mrpc-glue-omar-espejel results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: train args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8578431372549019 - name: F1 type: f1 value: 0.8941605839416058 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-bert-base-mrpc-glue-omar-espejel This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.4366 - Accuracy: 0.8578 - F1: 0.8942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5221 | 1.09 | 500 | 0.4366 | 0.8578 | 0.8942 | | 0.3114 | 2.18 | 1000 | 0.6581 | 0.8725 | 0.9113 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
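
## Usage example

A minimal inference sketch (not part of the auto-generated card): MRPC is a sentence-pair task, so the classifier should receive both sentences together:

```python
from transformers import pipeline

# Sketch: paraphrase detection on an MRPC-style sentence pair.
classifier = pipeline("text-classification", model="platzi/platzi-bert-base-mrpc-glue-omar-espejel")
result = classifier({"text": "The company chairman said he would raise prices.",
                     "text_pair": "The chairman announced a price increase."})
print(result)
```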
raisin2402/marian-finetuned-kde4-en-to-fr
raisin2402
2022-07-29T12:59:05Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-07-29T11:08:39Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.83113187001415 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8560 - Bleu: 52.8311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
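
## Usage example

A minimal inference sketch (not part of the auto-generated card):

```python
from transformers import pipeline

# Sketch: English -> French translation with the fine-tuned checkpoint.
translator = pipeline("translation", model="raisin2402/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```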
marii/lunarlander
marii
2022-07-29T12:31:04Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T09:25:07Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 278.03 +/- 20.09
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading-and-evaluation sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repository):

```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint file in this repo is named "ppo-LunarLander-v2.zip".
checkpoint = load_from_hub(repo_id="marii/lunarlander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over 10 episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
turhancan97/dqn-SpaceInvadersNoFrameskip-v4
turhancan97
2022-07-29T12:12:16Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T12:11:45Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 424.00 +/- 124.70 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga turhancan97 -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga turhancan97 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 500000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
AlbertShu/Reinforce-v1
AlbertShu
2022-07-29T11:26:16Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T11:26:01Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-v1 results: - metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
gazzehamine/wav2vec2-base-timit-demo-google-colab
gazzehamine
2022-07-29T10:53:20Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-15T14:10:29Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5707 - Wer: 0.3388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5072 | 1.0 | 500 | 1.8786 | 0.9741 | | 0.8836 | 2.01 | 1000 | 0.5147 | 0.5317 | | 0.4576 | 3.01 | 1500 | 0.4774 | 0.4591 | | 0.3056 | 4.02 | 2000 | 0.4393 | 0.4343 | | 0.2349 | 5.02 | 2500 | 0.4404 | 0.4022 | | 0.1946 | 6.02 | 3000 | 0.4564 | 0.3991 | | 0.1624 | 7.03 | 3500 | 0.4428 | 0.3947 | | 0.1421 | 8.03 | 4000 | 0.4312 | 0.3878 | | 0.131 | 9.04 | 4500 | 0.4345 | 0.3853 | | 0.1115 | 10.04 | 5000 | 0.4318 | 0.3753 | | 0.1024 | 11.04 | 5500 | 0.5053 | 0.3798 | | 0.0895 | 12.05 | 6000 | 0.5044 | 0.3782 | | 0.0856 | 13.05 | 6500 | 0.4893 | 0.3665 | | 0.0755 | 14.06 | 7000 | 0.4868 | 0.3662 | | 0.0724 | 15.06 | 7500 | 0.5084 | 0.3681 | | 0.0635 | 16.06 | 8000 | 0.5367 | 0.3530 | | 0.0603 | 17.07 | 8500 | 0.5255 | 0.3604 | | 0.0609 | 18.07 | 9000 | 0.5407 | 0.3678 | | 0.0486 | 19.08 | 9500 | 0.5312 | 0.3630 | | 0.047 | 20.08 | 10000 | 0.5498 | 0.3518 | | 0.0437 | 21.08 | 10500 | 0.5326 | 0.3571 | | 0.0379 | 22.09 | 11000 | 0.5644 | 0.3608 | | 0.035 | 23.09 | 11500 | 0.5956 | 0.3539 | | 0.0333 | 24.1 | 12000 | 0.5967 | 0.3517 | | 0.0289 | 25.1 | 12500 | 0.5274 | 0.3399 | | 0.0268 | 26.1 | 13000 | 0.5609 | 0.3406 | | 0.0256 | 27.11 | 13500 | 0.5451 | 0.3448 | | 0.0249 | 28.11 | 14000 | 0.5804 | 0.3413 | | 0.0236 | 29.12 | 14500 | 0.5707 | 0.3388 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
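
## Usage example

A minimal inference sketch (not part of the auto-generated card); the audio path is a placeholder and should point to a 16 kHz speech recording:

```python
from transformers import pipeline

# Sketch: transcribe a local speech file (placeholder path).
asr = pipeline("automatic-speech-recognition",
               model="gazzehamine/wav2vec2-base-timit-demo-google-colab")
print(asr("sample.wav")["text"])
```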
LanaKru/wikineural-multilingual-ner-finetuned-ner
LanaKru
2022-07-29T09:36:52Z
10
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:skript", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-29T04:14:38Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - skript metrics: - precision - recall - f1 - accuracy model-index: - name: wikineural-multilingual-ner-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: skript type: skript config: myscript split: train args: myscript metrics: - name: Precision type: precision value: 0.9007335298553506 - name: Recall type: recall value: 0.9301946902654867 - name: F1 type: f1 value: 0.9152270827528559 - name: Accuracy type: accuracy value: 0.9653644982020269 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wikineural-multilingual-ner-finetuned-ner This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on the skript dataset. It achieves the following results on the evaluation set: - Loss: 0.1243 - Precision: 0.9007 - Recall: 0.9302 - F1: 0.9152 - Accuracy: 0.9654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 298 | 0.1179 | 0.8975 | 0.8981 | 0.8978 | 0.9592 | | 0.104 | 2.0 | 596 | 0.1161 | 0.9051 | 0.9201 | 0.9126 | 0.9648 | | 0.104 | 3.0 | 894 | 0.1243 | 0.9007 | 0.9302 | 0.9152 | 0.9654 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
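
## Usage example

A minimal inference sketch (not part of the auto-generated card); the label set comes from the skript dataset the model was fine-tuned on:

```python
from transformers import pipeline

# Sketch: grouped-entity prediction with the fine-tuned checkpoint.
ner = pipeline("token-classification",
               model="LanaKru/wikineural-multilingual-ner-finetuned-ner",
               aggregation_strategy="simple")
for entity in ner("Put 200 g of flour and two eggs into a large bowl."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```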
SummerChiam/pond_image_classification_9
SummerChiam
2022-07-29T09:13:48Z
51
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-29T09:13:31Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_9 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9974489808082581 --- # pond_image_classification_9 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
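
## Usage example

A minimal inference sketch (not part of the autogenerated card); the image path is a placeholder:

```python
from transformers import pipeline

# Sketch: classify a pond image into one of the classes shown above.
classifier = pipeline("image-classification", model="SummerChiam/pond_image_classification_9")
for pred in classifier("pond.png"):
    print(pred["label"], round(pred["score"], 4))
```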
Go2Heart/BERT_Mod_3
Go2Heart
2022-07-29T09:11:43Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-29T07:36:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: BERT_Mod_3 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.8198675496688742 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_Mod_3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6760 - Accuracy: 0.8199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5167 | 1.0 | 24544 | 0.4953 | 0.8077 | | 0.414 | 2.0 | 49088 | 0.4802 | 0.8148 | | 0.2933 | 3.0 | 73632 | 0.5783 | 0.8186 | | 0.2236 | 4.0 | 98176 | 0.6760 | 0.8199 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
RRajesh27/finetuning-sentiment-model-3000-samples
RRajesh27
2022-07-29T08:51:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-29T08:39:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8666666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3236 - Accuracy: 0.8667 - F1: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
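
## Usage example

A minimal inference sketch (not part of the auto-generated card); unless `id2label` was customised during training, the outputs use the generic `LABEL_0`/`LABEL_1` names:

```python
from transformers import pipeline

# Sketch: binary sentiment prediction with the fine-tuned checkpoint.
sentiment = pipeline("text-classification",
                     model="RRajesh27/finetuning-sentiment-model-3000-samples")
print(sentiment("This movie was a delightful surprise."))
```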
tosin/dialogpt_afriwoz_pidgin
tosin
2022-07-29T08:41:47Z
18
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "en", "dataset:AfriWOZ", "arxiv:2204.08083", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-29T08:00:24Z
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- AfriWOZ
metrics:
- perplexity
widget:
- text: "How I fit chop for here?"
---

## DialoGPT_AfriWOZ (Pidgin)

This is a fine-tuned model of DialoGPT (small) on the AfriWOZ dataset. It is intended to be used as a conversational system in the Nigerian Pidgin English language.

The dataset it is trained on is limited in scope, as it covers only certain domains, such as restaurant, hotel, taxi, and booking.

The perplexity achieved on the validation set is 38.52.

* Generation example from an interactive environment:

|Role | Response |
|------|------------|
|User | I hear say restaurant dey here. |
|Bot | I dey find restaurant. |
|User | Abeg you fit tell me which kind chop dey? |
|Bot | I no dey sure right now. |
|User | You do well. Thank you. |
|Bot | I dey find restauarant. |

Please find the information about the preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)

The paper for this work can be found on arXiv: [https://arxiv.org/pdf/2204.08083.pdf](https://arxiv.org/pdf/2204.08083.pdf)

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_afriwoz_pidgin")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_afriwoz_pidgin")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty-print the last output tokens from the bot
    print("DialoGPT_pidgin_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
commanderstrife/distilBERT_bio_pv_superset
commanderstrife
2022-07-29T08:36:40Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-29T05:41:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilBERT_bio_pv_superset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBERT_bio_pv_superset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2328 - Precision: 0.5462 - Recall: 0.5325 - F1: 0.5393 - Accuracy: 0.9495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0964 | 1.0 | 5467 | 0.1593 | 0.4625 | 0.3682 | 0.4100 | 0.9416 | | 0.1918 | 2.0 | 10934 | 0.1541 | 0.4796 | 0.4658 | 0.4726 | 0.9436 | | 0.0394 | 3.0 | 16401 | 0.1508 | 0.5349 | 0.4744 | 0.5028 | 0.9482 | | 0.1207 | 4.0 | 21868 | 0.1615 | 0.5422 | 0.4953 | 0.5177 | 0.9490 | | 0.0221 | 5.0 | 27335 | 0.1827 | 0.5377 | 0.5018 | 0.5191 | 0.9487 | | 0.0629 | 6.0 | 32802 | 0.1874 | 0.5479 | 0.5130 | 0.5299 | 0.9493 | | 0.0173 | 7.0 | 38269 | 0.2025 | 0.5388 | 0.5323 | 0.5356 | 0.9488 | | 0.2603 | 8.0 | 43736 | 0.2148 | 0.5437 | 0.5397 | 0.5417 | 0.9493 | | 0.0378 | 9.0 | 49203 | 0.2323 | 0.5430 | 0.5194 | 0.5310 | 0.9489 | | 0.031 | 10.0 | 54670 | 0.2328 | 0.5462 | 0.5325 | 0.5393 | 0.9495 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
SummerChiam/pond_image_classification_7
SummerChiam
2022-07-29T08:32:46Z
48
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-29T08:32:27Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_7 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9936224222183228 --- # pond_image_classification_7 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
Frikallo/out
Frikallo
2022-07-29T08:29:57Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-29T08:00:19Z
--- license: mit tags: - generated_from_trainer model-index: - name: out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # out This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001372 - train_batch_size: 1 - eval_batch_size: 8 - seed: 2370848220 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
ParkSaeroyi/distilroberta-base-finetuned-wikitext2
ParkSaeroyi
2022-07-29T08:10:16Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T10:00:51Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.3687 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 6 | 8.8622 | | No log | 2.0 | 12 | 8.4576 | | No log | 3.0 | 18 | 8.4412 | ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
pkufool/icefall_librispeech_streaming_pruned_transducer_stateless5_20220729
pkufool
2022-07-29T08:08:41Z
0
0
null
[ "tensorboard", "license:apache-2.0", "region:us" ]
null
2022-07-29T07:42:03Z
--- license: apache-2.0 --- See https://github.com/k2-fsa/icefall/pull/454 ### training command: ```bash ./pruned_transducer_stateless5/train.py \ --exp-dir pruned_transducer_stateless5/exp \ --num-encoder-layers 18 \ --dim-feedforward 2048 \ --nhead 8 \ --encoder-dim 512 \ --decoder-dim 512 \ --joiner-dim 512 \ --full-libri 1 \ --dynamic-chunk-training 1 \ --causal-convolution 1 \ --short-chunk-size 20 \ --num-left-chunks 4 \ --max-duration 300 \ --world-size 4 \ --start-epoch 1 \ --num-epochs 25 ``` You can find the tensorboard log here <https://tensorboard.dev/experiment/rO04h6vjTLyw0qSxjp4m4Q> ### The decoding command is: ```bash decoding_method="greedy_search" # "fast_beam_search", "modified_beam_search" for chunk in 2 4 8 16; do for left in 32 64; do ./pruned_transducer_stateless5/decode.py \ --num-encoder-layers 18 \ --dim-feedforward 2048 \ --nhead 8 \ --encoder-dim 512 \ --decoder-dim 512 \ --joiner-dim 512 \ --simulate-streaming 1 \ --decode-chunk-size ${chunk} \ --left-context ${left} \ --causal-convolution 1 \ --epoch 25 \ --avg 5 \ --exp-dir ./pruned_transducer_stateless5/exp \ --max-sym-per-frame 1 \ --max-duration 1000 \ --decoding-method ${decoding_method} done done ``` ### export command is: ```bash ./pruned_transducer_stateless5/export.py \ --streaming-model 1 \ --causal-convolution 1 \ --num-encoder-layers 18 \ --dim-feedforward 2048 \ --nhead 8 \ --encoder-dim 512 \ --decoder-dim 512 \ --joiner-dim 512 \ --epoch 25 \ --avg 5 \ --exp-dir ./pruned_transducer_stateless5/exp ./pruned_transducer_stateless5/export.py \ --streaming-model 1 \ --causal-convolution 1 \ --num-encoder-layers 18 \ --dim-feedforward 2048 \ --nhead 8 \ --encoder-dim 512 \ --decoder-dim 512 \ --joiner-dim 512 \ --epoch 25 \ --avg 5 \ --exp-dir ./pruned_transducer_stateless5/exp \ --jit 1 ```
ilmariky/bert-base-finnish-cased-squad2-fi
ilmariky
2022-07-29T07:54:28Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "fi", "license:gpl-3.0", "endpoints_compatible", "region:us" ]
question-answering
2022-07-12T18:27:12Z
--- language: fi datasets: - SQuAD_v2_fi + Finnish partition of TyDi-QA license: gpl-3.0 --- # bert-base-finnish-cased-v1 for QA This is the [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model, fine-tuned using an automatically translated [Finnish version of the SQuAD2.0 dataset](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) in combination with the Finnish partition of the [TyDi-QA](https://github.com/google-research-datasets/tydiqa) dataset. It's been trained on question-answer pairs, **including unanswerable questions**, for the task of question answering. When the model classifies the question as unanswerable, it outputs "[CLS]". There is also a QA model available that does not try to identify unanswerable questions, [ bert-base-finnish-cased-squad1-fi ](https://huggingface.co/ilmariky/bert-base-finnish-cased-squad1-fi). ## Overview **Language model:** bert-base-finnish-cased-v1 **Language:** Finnish **Downstream-task:** Extractive QA **Training data:** [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA **Eval data:** [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA ## Usage ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "ilmariky/bert-base-finnish-cased-squad2-fi" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Mikä tämä on?', 'context': 'Tämä on testi.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated with a slightly modified version of the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ``` { "exact": 55.53157042633567, "f1": 61.869335312255835, "total": 7412, "HasAns_exact": 51.26503525508088, "HasAns_f1": 61.006950090095565, "HasAns_total": 4822, "NoAns_exact": 63.47490347490348, "NoAns_f1": 63.47490347490348, "NoAns_total": 2590 } ```
chintagunta85/test_ner3
chintagunta85
2022-07-29T04:40:30Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:pv_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-29T02:46:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - pv_dataset metrics: - precision - recall - f1 - accuracy model-index: - name: test_ner3 results: - task: name: Token Classification type: token-classification dataset: name: pv_dataset type: pv_dataset config: PVDatasetCorpus split: train args: PVDatasetCorpus metrics: - name: Precision type: precision value: 0.6698151950718686 - name: Recall type: recall value: 0.6499117663801446 - name: F1 type: f1 value: 0.6597133941985438 - name: Accuracy type: accuracy value: 0.9606609586670052 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_ner3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pv_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.2983 - Precision: 0.6698 - Recall: 0.6499 - F1: 0.6597 - Accuracy: 0.9607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1106 | 1.0 | 1813 | 0.1128 | 0.6050 | 0.5949 | 0.5999 | 0.9565 | | 0.0705 | 2.0 | 3626 | 0.1190 | 0.6279 | 0.6122 | 0.6200 | 0.9585 | | 0.0433 | 3.0 | 5439 | 0.1458 | 0.6342 | 0.5983 | 0.6157 | 0.9574 | | 0.0301 | 4.0 | 7252 | 0.1453 | 0.6305 | 0.6818 | 0.6552 | 0.9594 | | 0.0196 | 5.0 | 9065 | 0.1672 | 0.6358 | 0.6871 | 0.6605 | 0.9594 | | 0.0133 | 6.0 | 10878 | 0.1931 | 0.6427 | 0.6138 | 0.6279 | 0.9587 | | 0.0104 | 7.0 | 12691 | 0.1948 | 0.6657 | 0.6511 | 0.6583 | 0.9607 | | 0.0081 | 8.0 | 14504 | 0.2243 | 0.6341 | 0.6574 | 0.6455 | 0.9586 | | 0.0054 | 9.0 | 16317 | 0.2432 | 0.6547 | 0.6318 | 0.6431 | 0.9588 | | 0.0041 | 10.0 | 18130 | 0.2422 | 0.6717 | 0.6397 | 0.6553 | 0.9605 | | 0.0041 | 11.0 | 19943 | 0.2415 | 0.6571 | 0.6420 | 0.6495 | 0.9601 | | 0.0027 | 12.0 | 21756 | 0.2567 | 0.6560 | 0.6590 | 0.6575 | 0.9601 | | 0.0023 | 13.0 | 23569 | 0.2609 | 0.6640 | 0.6495 | 0.6566 | 0.9606 | | 0.002 | 14.0 | 25382 | 0.2710 | 0.6542 | 0.6670 | 0.6606 | 0.9598 | | 0.0012 | 15.0 | 27195 | 0.2766 | 0.6692 | 0.6539 | 0.6615 | 0.9610 | | 0.001 | 16.0 | 29008 | 0.2938 | 0.6692 | 0.6415 | 0.6551 | 0.9603 | | 0.0007 | 17.0 | 30821 | 0.2969 | 0.6654 | 0.6490 | 0.6571 | 0.9604 | | 0.0007 | 18.0 | 32634 | 0.3035 | 0.6628 | 0.6456 | 0.6541 | 0.9601 | | 0.0007 | 19.0 | 34447 | 0.2947 | 0.6730 | 0.6489 | 0.6607 | 0.9609 | | 0.0004 | 20.0 | 36260 | 0.2983 | 0.6698 | 0.6499 | 0.6597 | 0.9607 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
wpolatkan/ppo-LunarLander-v2
wpolatkan
2022-07-29T04:37:44Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-29T04:34:31Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - metrics:
    - type: mean_reward
      value: 244.25 +/- 15.32
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading-and-evaluation sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repository):

```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint file in this repo is named "ppo-LunarLander-v2.zip".
checkpoint = load_from_hub(repo_id="wpolatkan/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over 10 episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
commanderstrife/ADE-Bio_ClinicalBERT-NER
commanderstrife
2022-07-29T01:39:43Z
213
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-29T01:24:29Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: ADE-Bio_ClinicalBERT-NER results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ADE-Bio_ClinicalBERT-NER This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1926 - Precision: 0.7830 - Recall: 0.8811 - F1: 0.8291 - Accuracy: 0.9437 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2389 | 1.0 | 201 | 0.2100 | 0.7155 | 0.8292 | 0.7681 | 0.9263 | | 0.0648 | 2.0 | 402 | 0.1849 | 0.7716 | 0.8711 | 0.8183 | 0.9392 | | 0.2825 | 3.0 | 603 | 0.1856 | 0.7834 | 0.8788 | 0.8284 | 0.9422 | | 0.199 | 4.0 | 804 | 0.1875 | 0.7796 | 0.8781 | 0.8259 | 0.9430 | | 0.0404 | 5.0 | 1005 | 0.1926 | 0.7830 | 0.8811 | 0.8291 | 0.9437 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
wmFrank/sample-factory-2-atari-breakout
wmFrank
2022-07-28T23:31:06Z
2
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-28T23:10:36Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - metrics:
    - type: mean_reward
      value: 30.20 +/- 23.45
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: atari_breakout
      type: atari_breakout
---

An **APPO** model trained on the **atari_breakout** environment.

This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
kabelomalapane/Zu-En_update
kabelomalapane
2022-07-28T23:10:22Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-07-28T20:40:35Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: Zu-En_update results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Zu-En_update This model is a fine-tuned version of [kabelomalapane/model_zu-en_updated](https://huggingface.co/kabelomalapane/model_zu-en_updated) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9399 - Bleu: 27.9608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 2.1017 | 1.0 | 1173 | 1.8404 | 29.1031 | | 1.7497 | 2.0 | 2346 | 1.8318 | 28.9036 | | 1.523 | 3.0 | 3519 | 1.8250 | 28.8415 | | 1.364 | 4.0 | 4692 | 1.8551 | 28.6215 | | 1.2462 | 5.0 | 5865 | 1.8684 | 28.3783 | | 1.1515 | 6.0 | 7038 | 1.8948 | 28.3372 | | 1.0796 | 7.0 | 8211 | 1.9109 | 28.1603 | | 1.0215 | 8.0 | 9384 | 1.9274 | 28.0309 | | 0.9916 | 9.0 | 10557 | 1.9323 | 27.9472 | | 0.9583 | 10.0 | 11730 | 1.9399 | 27.9260 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
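## How to use

No usage example is included in the card; a minimal sketch with the translation pipeline (the isiZulu input sentence is an illustrative assumption):

```python
from transformers import pipeline

# MarianMT checkpoints work with the generic translation pipeline
translator = pipeline("translation", model="kabelomalapane/Zu-En_update")
print(translator("Sawubona, unjani?"))  # isiZulu -> English
```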
maesneako/ES_corlec_DeepESP-gpt2-spanish
maesneako
2022-07-28T22:04:11Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-28T12:58:13Z
--- license: mit tags: - generated_from_trainer model-index: - name: ES_corlec_DeepESP-gpt2-spanish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ES_corlec_DeepESP-gpt2-spanish This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.0360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.2471 | 0.4 | 2000 | 4.2111 | | 4.1503 | 0.79 | 4000 | 4.1438 | | 4.0749 | 1.19 | 6000 | 4.1077 | | 4.024 | 1.59 | 8000 | 4.0857 | | 3.9855 | 1.98 | 10000 | 4.0707 | | 3.9465 | 2.38 | 12000 | 4.0605 | | 3.9277 | 2.78 | 14000 | 4.0533 | | 3.9159 | 3.17 | 16000 | 4.0482 | | 3.8918 | 3.57 | 18000 | 4.0448 | | 3.8789 | 3.97 | 20000 | 4.0421 | | 3.8589 | 4.36 | 22000 | 4.0402 | | 3.8554 | 4.76 | 24000 | 4.0387 | | 3.8509 | 5.15 | 26000 | 4.0377 | | 3.8389 | 5.55 | 28000 | 4.0370 | | 3.8288 | 5.95 | 30000 | 4.0365 | | 3.8293 | 6.34 | 32000 | 4.0362 | | 3.8202 | 6.74 | 34000 | 4.0360 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.1+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
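## How to use

A minimal text-generation sketch (the Spanish prompt and generation settings are illustrative assumptions):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="maesneako/ES_corlec_DeepESP-gpt2-spanish")
print(generator("Bueno, pues entonces", max_length=50, num_return_sequences=1))
```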
domenicrosati/deberta-v3-large-finetuned-synthetic-paraphrase-only
domenicrosati
2022-07-28T21:38:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-27T13:31:37Z
--- license: mit tags: - text-classification - generated_from_trainer metrics: - f1 - precision - recall model-index: - name: deberta-v3-large-finetuned-synthetic-paraphrase-only results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-finetuned-synthetic-paraphrase-only This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0120 - F1: 0.9768 - Precision: 0.9961 - Recall: 0.9583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:| | 0.0086 | 1.0 | 10205 | 0.0114 | 0.9642 | 0.9846 | 0.9446 | | 0.0059 | 2.0 | 20410 | 0.0143 | 0.9658 | 0.9961 | 0.9373 | | 0.0 | 3.0 | 30615 | 0.0141 | 0.9716 | 0.9961 | 0.9483 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
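## How to use

A usage sketch only: the card does not document the input format or label names, so treating the inputs as a sentence pair and the labels as paraphrase/non-paraphrase are assumptions:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="domenicrosati/deberta-v3-large-finetuned-synthetic-paraphrase-only",
)
# The text/text_pair dict form feeds the two sentences as a single pair
print(clf({"text": "The cat sat on the mat.", "text_pair": "A cat was sitting on the mat."}))
```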
Evelyn18/roberta-base-spanish-squades-becasIncentivos6
Evelyn18
2022-07-28T21:38:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-28T21:08:34Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: roberta-base-spanish-squades-becasIncentivos6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-spanish-squades-becasIncentivos6 This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 2.0023 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 3 | 2.2257 | | No log | 2.0 | 6 | 1.8301 | | No log | 3.0 | 9 | 1.7627 | | No log | 4.0 | 12 | 1.8773 | | No log | 5.0 | 15 | 1.9731 | | No log | 6.0 | 18 | 2.0023 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
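## How to use

A minimal question-answering sketch (the Spanish question and context are illustrative assumptions, not taken from the becasv2 dataset):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-becasIncentivos6",
)
result = qa(
    question="¿Quiénes pueden solicitar la beca?",
    context="La beca de incentivos está dirigida a estudiantes de pregrado con promedio destacado.",
)
print(result)
```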
carblacac/xlm-roberta-base-finetuned-panx-de
carblacac
2022-07-28T18:47:01Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-28T18:02:50Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
amirthaa/dspa
amirthaa
2022-07-28T17:18:48Z
3
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-28T17:18:27Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: dspa results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dspa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6069 - Validation Loss: 0.6854 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 142110, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.8363 | 0.6965 | 0 | | 0.6069 | 0.6854 | 1 | ### Framework versions - Transformers 4.21.0 - TensorFlow 2.9.1 - Datasets 2.4.0 - Tokenizers 0.12.1
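## How to use

The repository only ships TensorFlow weights (note the `tf` tag), so the sketch below forces the TF backend; the example input is an assumption, and since the label names are undocumented the outputs will use generic `LABEL_i` ids:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="amirthaa/dspa", framework="tf")
print(clf("This is a sample input sentence."))
```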
Billwzl/20split_dataset_version3
Billwzl
2022-07-28T16:20:35Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-27T11:21:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: 20split_dataset_version3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20split_dataset_version3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1679 | 1.0 | 313 | 2.9768 | | 2.9869 | 2.0 | 626 | 2.9299 | | 2.8528 | 3.0 | 939 | 2.9176 | | 2.7435 | 4.0 | 1252 | 2.9104 | | 2.6458 | 5.0 | 1565 | 2.8863 | | 2.5865 | 6.0 | 1878 | 2.8669 | | 2.5218 | 7.0 | 2191 | 2.8802 | | 2.4647 | 8.0 | 2504 | 2.8639 | | 2.3933 | 9.0 | 2817 | 2.8543 | | 2.3687 | 10.0 | 3130 | 2.8573 | | 2.3221 | 11.0 | 3443 | 2.8398 | | 2.276 | 12.0 | 3756 | 2.8415 | | 2.2379 | 13.0 | 4069 | 2.8471 | | 2.2427 | 14.0 | 4382 | 2.8318 | | 2.1741 | 15.0 | 4695 | 2.8356 | | 2.1652 | 16.0 | 5008 | 2.8310 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news
Atharvgarg
2022-07-28T15:22:19Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "summarisation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-28T14:37:18Z
--- license: apache-2.0 tags: - summarisation - generated_from_trainer metrics: - rouge model-index: - name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6835 - Rouge1: 58.9345 - Rouge2: 47.1037 - Rougel: 40.9839 - Rougelsum: 57.6981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.8246 | 1.0 | 223 | 0.7050 | 55.7882 | 42.9793 | 38.4511 | 54.3125 | | 0.6414 | 2.0 | 446 | 0.6834 | 55.149 | 42.664 | 38.3864 | 53.7712 | | 0.5603 | 3.0 | 669 | 0.6815 | 56.9756 | 44.8057 | 39.1377 | 55.5815 | | 0.5079 | 4.0 | 892 | 0.6749 | 57.7397 | 45.6267 | 40.0509 | 56.3886 | | 0.4622 | 5.0 | 1115 | 0.6781 | 58.07 | 45.9102 | 40.2704 | 56.7008 | | 0.4263 | 6.0 | 1338 | 0.6798 | 58.1215 | 45.976 | 40.256 | 56.8203 | | 0.399 | 7.0 | 1561 | 0.6798 | 58.5486 | 46.6901 | 40.8045 | 57.2947 | | 0.3815 | 8.0 | 1784 | 0.6835 | 58.9345 | 47.1037 | 40.9839 | 57.6981 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
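## How to use

A minimal summarization sketch (the input text and generation lengths are illustrative assumptions; the model was fine-tuned on BBC-style news articles):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news",
)
article = "Replace this placeholder with the full text of a news article..."
print(summarizer(article, max_length=64, min_length=8))
```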
Dugerij/Reinforce-pixelcopter
Dugerij
2022-07-28T14:45:45Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-28T14:45:39Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
  results:
  - metrics:
    - type: mean_reward
      value: 17.00 +/- 12.95
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
AlexKolosov/my_first_model
AlexKolosov
2022-07-28T14:14:33Z
16
0
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-28T12:48:11Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: my_first_model results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.6 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_first_model This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6853 - Accuracy: 0.6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6918 | 1.0 | 23 | 0.6895 | 0.8 | | 0.7019 | 2.0 | 46 | 0.6859 | 0.6 | | 0.69 | 3.0 | 69 | 0.6853 | 0.6 | ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.0+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
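## How to use

A minimal image-classification sketch (the image path is a placeholder, and the class labels come from the author's private `imagefolder` dataset, so they are not documented here):

```python
from transformers import pipeline

clf = pipeline("image-classification", model="AlexKolosov/my_first_model")
print(clf("path/to/image.jpg"))  # placeholder path to a local image
```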
Nekoo/P0ken_picture
Nekoo
2022-07-28T13:33:38Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-07-28T13:33:38Z
--- license: bigscience-bloom-rail-1.0 ---
Perselope/thesis-audio-1
Perselope
2022-07-28T13:27:40Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-26T22:02:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: thesis-audio-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # thesis-audio-1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4268 - Wer: 0.3395 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4633 | 4.0 | 500 | 1.4892 | 1.0006 | | 0.5377 | 8.0 | 1000 | 0.4046 | 0.4163 | | 0.1818 | 12.0 | 1500 | 0.4255 | 0.3850 | | 0.1024 | 16.0 | 2000 | 0.4574 | 0.3644 | | 0.0723 | 20.0 | 2500 | 0.4412 | 0.3550 | | 0.0542 | 24.0 | 3000 | 0.4095 | 0.3404 | | 0.0434 | 28.0 | 3500 | 0.4268 | 0.3395 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
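## How to use

A minimal transcription sketch (the audio path is a placeholder; wav2vec 2.0 checkpoints expect 16 kHz mono audio):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Perselope/thesis-audio-1")
print(asr("path/to/audio.wav"))  # placeholder path to a 16 kHz WAV file
```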
Dugerij/Reinforce-cartpoleModel
Dugerij
2022-07-28T13:25:26Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-28T13:25:18Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpoleModel
  results:
  - metrics:
    - type: mean_reward
      value: 49.30 +/- 10.99
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
kabelomalapane/En-Zu_update
kabelomalapane
2022-07-28T13:24:27Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-07-28T10:55:08Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: En-Zu_update results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # En-Zu_update This model is a fine-tuned version of [kabelomalapane/test_model1.2_updated](https://huggingface.co/kabelomalapane/test_model1.2_updated) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7101 - Bleu: 11.8551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 1.9111 | 1.0 | 1173 | 1.7594 | 11.7012 | | 1.7191 | 2.0 | 2346 | 1.7279 | 12.0250 | | 1.5709 | 3.0 | 3519 | 1.7172 | 10.6222 | | 1.4924 | 4.0 | 4692 | 1.7042 | 11.4224 | | 1.4188 | 5.0 | 5865 | 1.7051 | 11.4330 | | 1.3566 | 6.0 | 7038 | 1.6972 | 11.5300 | | 1.3141 | 7.0 | 8211 | 1.7041 | 11.4339 | | 1.2641 | 8.0 | 9384 | 1.7064 | 11.4030 | | 1.2437 | 9.0 | 10557 | 1.7079 | 11.4014 | | 1.2333 | 10.0 | 11730 | 1.7101 | 11.5164 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1
ivan-savchuk
2022-07-28T12:14:51Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-07-28T11:47:03Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1')
model = AutoModel.from_pretrained('ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ivan-savchuk/msmarco-distilbert-dot-v5-tuned-full-v1)

## Training
The model was trained with the parameters:

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 3165 with parameters:
```
{'batch_size': 16}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 316,
    "weight_decay": 0.01
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
maesneako/ES_corlec
maesneako
2022-07-28T11:10:09Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-21T09:59:37Z
--- license: mit tags: - generated_from_trainer model-index: - name: ES_corlec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ES_corlec This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.1+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
amartyobanerjee/distilbert-base-uncased-whole-word-word-ids-finetuned-imdb
amartyobanerjee
2022-07-28T10:01:48Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T09:53:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-whole-word-word-ids-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-whole-word-word-ids-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6573 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7261 | 1.0 | 157 | 0.6532 | | 0.6766 | 2.0 | 314 | 0.6514 | | 0.6677 | 3.0 | 471 | 0.6555 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
amartyobanerjee/distilbert-base-uncased-finetuned-imdb
amartyobanerjee
2022-07-28T09:45:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-28T05:27:01Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AlbertShu/Reinforce-v0
AlbertShu
2022-07-28T09:22:30Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-28T09:22:20Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v0
  results:
  - metrics:
    - type: mean_reward
      value: 99.30 +/- 29.54
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
jaeyeon/korean-aihub-learning-math-16batch
jaeyeon
2022-07-28T08:13:59Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-28T07:10:40Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: korean-aihub-learning-math-16batch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # korean-aihub-learning-math-16batch This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1497 - Wer: 0.5260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 20 | 32.0718 | 1.0 | | No log | 2.0 | 40 | 24.7403 | 1.0808 | | No log | 3.0 | 60 | 5.8389 | 1.0 | | No log | 4.0 | 80 | 4.8543 | 1.0 | | 19.6583 | 5.0 | 100 | 4.4453 | 1.0 | | 19.6583 | 6.0 | 120 | 4.3923 | 1.0 | | 19.6583 | 7.0 | 140 | 4.2902 | 1.0 | | 19.6583 | 8.0 | 160 | 3.9026 | 0.9959 | | 19.6583 | 9.0 | 180 | 3.0616 | 0.9740 | | 3.7358 | 10.0 | 200 | 2.2049 | 0.8534 | | 3.7358 | 11.0 | 220 | 1.6666 | 0.7288 | | 3.7358 | 12.0 | 240 | 1.4123 | 0.6603 | | 3.7358 | 13.0 | 260 | 1.3113 | 0.6164 | | 3.7358 | 14.0 | 280 | 1.2269 | 0.6356 | | 0.8398 | 15.0 | 300 | 1.2349 | 0.5945 | | 0.8398 | 16.0 | 320 | 1.1970 | 0.5658 | | 0.8398 | 17.0 | 340 | 1.2144 | 0.5562 | | 0.8398 | 18.0 | 360 | 1.2551 | 0.5658 | | 0.8398 | 19.0 | 380 | 1.1971 | 0.5493 | | 0.2649 | 20.0 | 400 | 1.1967 | 0.5247 | | 0.2649 | 21.0 | 420 | 1.2796 | 0.5849 | | 0.2649 | 22.0 | 440 | 1.2156 | 0.5521 | | 0.2649 | 23.0 | 460 | 1.2118 | 0.5425 | | 0.2649 | 24.0 | 480 | 1.1637 | 0.5384 | | 0.1801 | 25.0 | 500 | 1.1846 | 0.5562 | | 0.1801 | 26.0 | 520 | 1.1927 | 0.5534 | | 0.1801 | 27.0 | 540 | 1.2015 | 0.5384 | | 0.1801 | 28.0 | 560 | 1.2077 | 0.5397 | | 0.1801 | 29.0 | 580 | 1.1554 | 0.5260 | | 0.1364 | 30.0 | 600 | 1.1497 | 0.5260 | ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
CompVis/ldm-celebahq-256
CompVis
2022-07-28T08:12:07Z
199
42
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "arxiv:2112.10752", "license:apache-2.0", "diffusers:LDMPipeline", "region:us" ]
unconditional-image-generation
2022-07-15T17:28:35Z
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation --- # Latent Diffusion Models (LDM) **Paper**: [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) **Abstract**: *By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.* **Authors** *Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer* ## Usage ### Inference with a pipeline ```python !pip install diffusers from diffusers import DiffusionPipeline model_id = "CompVis/ldm-celebahq-256" # load model and scheduler pipeline = DiffusionPipeline.from_pretrained(model_id) # run pipeline in inference (sample random noise and denoise) image = pipeline(num_inference_steps=200)["sample"] # save image image[0].save("ldm_generated_image.png") ``` ### Inference with an unrolled loop ```python !pip install diffusers from diffusers import UNet2DModel, DDIMScheduler, VQModel import torch import PIL.Image import numpy as np import tqdm seed = 3 # load all models unet = UNet2DModel.from_pretrained("CompVis/ldm-celebahq-256", subfolder="unet") vqvae = VQModel.from_pretrained("CompVis/ldm-celebahq-256", subfolder="vqvae") scheduler = DDIMScheduler.from_config("CompVis/ldm-celebahq-256", subfolder="scheduler") # set to cuda torch_device = "cuda" if torch.cuda.is_available() else "cpu" unet.to(torch_device) vqvae.to(torch_device) # generate gaussian noise to be decoded generator = torch.manual_seed(seed) noise = torch.randn( (1, unet.in_channels, unet.sample_size, unet.sample_size), generator=generator, ).to(torch_device) # set inference steps for DDIM scheduler.set_timesteps(num_inference_steps=200) image = noise for t in tqdm.tqdm(scheduler.timesteps): # predict noise residual of previous image with torch.no_grad(): residual = unet(image, t)["sample"] # compute previous image x_t according to DDIM formula prev_image = scheduler.step(residual, t, image, eta=0.0)["prev_sample"] # x_t-1 -> x_t image = prev_image # decode image with vae with torch.no_grad(): image = vqvae.decode(image) # process image image_processed = image.cpu().permute(0, 2, 3, 1) 
image_processed = (image_processed + 1.0) * 127.5 image_processed = image_processed.clamp(0, 255).numpy().astype(np.uint8) image_pil = PIL.Image.fromarray(image_processed[0]) image_pil.save(f"generated_image_{seed}.png") ``` ## Samples 1. ![sample_0](https://huggingface.co/CompVis/latent-diffusion-celeba-256/resolve/main/images/generated_image_0.png) 2. ![sample_1](https://huggingface.co/CompVis/latent-diffusion-celeba-256/resolve/main/images/generated_image_1.png) 3. ![sample_2](https://huggingface.co/CompVis/latent-diffusion-celeba-256/resolve/main/images/generated_image_2.png) 4. ![sample_3](https://huggingface.co/CompVis/latent-diffusion-celeba-256/resolve/main/images/generated_image_3.png)
pkufool/icefall-asr-librispeech-pruned-stateless-streaming-conformer-rnnt4-2022-06-10
pkufool
2022-07-28T08:00:20Z
0
1
null
[ "tensorboard", "license:apache-2.0", "region:us" ]
null
2022-06-09T22:50:20Z
--- license: apache-2.0 --- The pretrained model (pruned_transducer_stateless4) in https://github.com/k2-fsa/icefall/pull/380 ### training ``` #!/usr/bin/env bash set -x K2_ROOT=/path/to/k2 ICEFALL=/path/to/icefall export PYTHONPATH=$K2_ROOT/k2/python:$PYTHONPATH export PYTHONPATH=$K2_ROOT/build/lib:$PYTHONPATH export PYTHONPATH=$ICEFALL:$PYTHONPATH export CUDA_VISIBLE_DEVICES="0,1,2,3" ./pruned_transducer_stateless4/train.py \ --exp-dir pruned_transducer_stateless4/exp \ --full-libri 1 \ --dynamic-chunk-training 1 \ --short-chunk-size 32 \ --num-left-chunks 4 \ --causal-convolution 1 \ --max-duration 300 \ --world-size 4 \ --start-epoch 1 \ --num-epochs 30 ``` ### decoding #### simulate streaming ``` #!/usr/bin/env bash set -x K2_ROOT=/path/to/k2 ICEFALL=/path/to/icefall export PYTHONPATH=$K2_ROOT/k2/python:$PYTHONPATH export PYTHONPATH=$K2_ROOT/build/lib:$PYTHONPATH export PYTHONPATH=$ICEFALL:$PYTHONPATH export CUDA_VISIBLE_DEVICES="0" for size in 1 2 4 8 16 32; do for left in 32 64 -1; do ./pruned_transducer_stateless4/decode.py \ --simulate-streaming 1 \ --decode-chunk-size ${size} \ --left-context ${left} \ --causal-convolution 1 \ --use-averaged-model 1 \ --epoch 29 \ --avg 6 \ --exp-dir ./pruned_transducer_stateless4/exp \ --max-sym-per-frame 1 \ --max-duration 1000 \ --decoding-method greedy_search done done ``` #### streaming ``` #!/usr/bin/env bash set -x K2_ROOT=/path/to/k2 ICEFALL=/path/to/icefall export PYTHONPATH=$K2_ROOT/k2/python:$PYTHONPATH export PYTHONPATH=$K2_ROOT/build/lib:$PYTHONPATH export PYTHONPATH=$ICEFALL:$PYTHONPATH export CUDA_VISIBLE_DEVICES="0" #left_context=32 #chunk_size=8 left_context=64 chunk_size=16 for right in 0 2 4 8; do ./pruned_transducer_stateless4/streaming_decode.py \ --left-context ${left_context} \ --decode-chunk-size ${chunk_size} \ --right-context ${right} \ --exp-dir ./pruned_transducer_stateless4/exp \ --use-averaged-model 1 \ --epoch 29 \ --avg 6 \ --num-decode-streams 1000 done ``` ### export for pretrained.pt ``` python pruned_transducer_stateless4/export.py \ --exp-dir ./pruned_transducer_stateless4/exp \ --epoch 29 \ --avg 6 \ --streaming-model 1 \ --causal-convolution 1 ``` for cpu_jit.pt ``` python pruned_transducer_stateless4/export.py \ --exp-dir ./pruned_transducer_stateless4/exp \ --epoch 29 \ --avg 6 \ --streaming-model 1 \ --causal-convolution 1 \ --jit 1 ```
SharpAI/mal-tls-bert-base-w1q8
SharpAI
2022-07-28T07:05:48Z
4
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-28T07:03:33Z
---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls-bert-base-w1q8
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# mal_tls-bert-base-w1q8

This model was trained from scratch on an unknown dataset. No evaluation results were recorded.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.15.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.10.3
jaeyeon/korean-aihub-learning-math-8batch
jaeyeon
2022-07-28T06:51:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-28T05:48:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: korean-aihub-learning-math-8batch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # korean-aihub-learning-math-8batch This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1867 - Wer: 0.5315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 20 | 33.1529 | 1.0 | | No log | 2.0 | 40 | 28.0161 | 1.0 | | No log | 3.0 | 60 | 8.7324 | 1.0 | | No log | 4.0 | 80 | 4.9786 | 1.0 | | 21.6269 | 5.0 | 100 | 4.5335 | 1.0 | | 21.6269 | 6.0 | 120 | 4.4517 | 1.0 | | 21.6269 | 7.0 | 140 | 4.4068 | 1.0 | | 21.6269 | 8.0 | 160 | 4.3210 | 1.0 | | 21.6269 | 9.0 | 180 | 4.0041 | 0.9932 | | 4.1788 | 10.0 | 200 | 3.0921 | 0.9712 | | 4.1788 | 11.0 | 220 | 2.1650 | 0.8603 | | 4.1788 | 12.0 | 240 | 1.6135 | 0.7192 | | 4.1788 | 13.0 | 260 | 1.3842 | 0.6466 | | 4.1788 | 14.0 | 280 | 1.2872 | 0.5918 | | 1.205 | 15.0 | 300 | 1.2234 | 0.5808 | | 1.205 | 16.0 | 320 | 1.2694 | 0.6 | | 1.205 | 17.0 | 340 | 1.2287 | 0.5575 | | 1.205 | 18.0 | 360 | 1.1776 | 0.5877 | | 1.205 | 19.0 | 380 | 1.2418 | 0.5671 | | 0.2825 | 20.0 | 400 | 1.2469 | 0.5616 | | 0.2825 | 21.0 | 420 | 1.2203 | 0.5425 | | 0.2825 | 22.0 | 440 | 1.2270 | 0.5863 | | 0.2825 | 23.0 | 460 | 1.1930 | 0.5548 | | 0.2825 | 24.0 | 480 | 1.1242 | 0.5521 | | 0.1831 | 25.0 | 500 | 1.2245 | 0.5575 | | 0.1831 | 26.0 | 520 | 1.2276 | 0.5342 | | 0.1831 | 27.0 | 540 | 1.1641 | 0.5205 | | 0.1831 | 28.0 | 560 | 1.1727 | 0.5329 | | 0.1831 | 29.0 | 580 | 1.1885 | 0.5534 | | 0.14 | 30.0 | 600 | 1.1867 | 0.5315 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
tuner007/pegasus_summarizer
tuner007
2022-07-28T06:38:07Z
793
43
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "seq2seq", "summarization", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
---
language: en
license: apache-2.0
tags:
- pegasus
- seq2seq
- summarization
model-index:
- name: tuner007/pegasus_summarizer
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: cnn_dailymail
      type: cnn_dailymail
      config: 3.0.0
      split: train
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 36.604
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 14.6398
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 23.8845
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 32.9017
      verified: true
    - name: loss
      type: loss
      value: 2.5757133960723877
      verified: true
    - name: gen_len
      type: gen_len
      value: 76.3984
      verified: true
---

## Model description
[PEGASUS](https://github.com/google-research/pegasus) fine-tuned for summarization

## Install the "sentencepiece" library required for the tokenizer
```
pip install sentencepiece
```

## Model in Action 🚀
```
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'tuner007/pegasus_summarizer'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)

def get_response(input_text):
    batch = tokenizer([input_text], truncation=True, padding='longest', max_length=1024, return_tensors="pt").to(torch_device)
    gen_out = model.generate(**batch, max_length=128, num_beams=5, num_return_sequences=1, temperature=1.5)
    output_text = tokenizer.batch_decode(gen_out, skip_special_tokens=True)
    return output_text
```

#### Example:
```
context = """ India wicket-keeper batsman Rishabh Pant has said someone from the crowd threw a ball on pacer Mohammed Siraj while he was fielding in the ongoing third Test against England on Wednesday. Pant revealed the incident made India skipper Virat Kohli "upset". "I think, somebody threw a ball inside, at Siraj, so he [Kohli] was upset," said Pant in a virtual press conference after the close of the first day\'s play. "You can say whatever you want to chant, but don\'t throw things at the fielders and all those things. It is not good for cricket, I guess," he added. In the third session of the opening day of the third Test, a section of spectators seemed to have asked Siraj the score of the match to tease the pacer. The India pacer however came with a brilliant reply as he gestured 1-0 (India leading the Test series) towards the crowd. Earlier this month, during the second Test match, there was some bad crowd behaviour on show as some unruly fans threw champagne corks at India batsman KL Rahul. Kohli also intervened and he was seen gesturing towards the opening batsman to know more about the incident. An over later, the TV visuals showed that many champagne corks were thrown inside the playing field, and the Indian players were visibly left frustrated. Coming back to the game, after bundling out India for 78, openers Rory Burns and Haseeb Hameed ensured that England took the honours on the opening day of the ongoing third Test. At stumps, England\'s score reads 120/0 and the hosts have extended their lead to 42 runs. For the Three Lions, Burns (52*) and Hameed (60*) are currently unbeaten at the crease. Talking about the pitch on opening day, Pant said, "They took the heavy roller, the wicket was much more settled down, and they batted nicely also," he said. "But when we batted, the wicket was slightly soft, and they bowled in good areas, but we could have applied [ourselves] much better." Both England batsmen managed to see off the final session and the hosts concluded the opening day with all ten wickets intact, extending the lead to 42. (ANI) """

get_response(context)
```

#### Output:
Team India wicketkeeper-batsman Rishabh Pant has said that Virat Kohli was "upset" after someone threw a ball on pacer Mohammed Siraj while he was fielding in the ongoing third Test against England. "You can say whatever you want to chant, but don't throw things at the fielders and all those things. It's not good for cricket, I guess," Pant added.

#### [Inshort](https://www.inshorts.com/) (60-word news summary app, rated 4.4 by 5,27,246+ users on the Android Play Store) summary:
India wicketkeeper-batsman Rishabh Pant has revealed that captain Virat Kohli was upset with the crowd during the first day of the Leeds Test against England because someone threw a ball at pacer Mohammed Siraj. Pant added, "You can say whatever you want to chant, but don't throw things at the fielders and all those things. It is not good for cricket."

> Created by [Arpit Rajauria](https://twitter.com/arpit_rajauria)
[![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/arpit_rajauria)
marifulhaque/wav2vec2-large-xls-r-300m-turkish-colab
marifulhaque
2022-07-28T03:03:45Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-09T15:31:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4411 - Wer: 0.3271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8286 | 3.67 | 400 | 0.6899 | 0.7462 | | 0.4378 | 7.34 | 800 | 0.4803 | 0.5127 | | 0.2073 | 11.01 | 1200 | 0.4640 | 0.4584 | | 0.1386 | 14.68 | 1600 | 0.4355 | 0.4252 | | 0.1058 | 18.35 | 2000 | 0.4476 | 0.3789 | | 0.0819 | 22.02 | 2400 | 0.4248 | 0.3543 | | 0.0666 | 25.69 | 2800 | 0.4276 | 0.3399 | | 0.0525 | 29.36 | 3200 | 0.4411 | 0.3271 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-v1
AykeeSalazar
2022-07-28T02:45:09Z
54
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-28T01:15:01Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vc-bantai-vit-withoutAMBI-adunest-v1 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder args: Violation-Classification---Raw-6 metrics: - name: Accuracy type: accuracy value: 0.9181222707423581 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vc-bantai-vit-withoutAMBI-adunest-v1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3318 - Accuracy: 0.9181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.23 | 100 | 0.3365 | 0.8581 | | No log | 0.45 | 200 | 0.3552 | 0.8472 | | No log | 0.68 | 300 | 0.3165 | 0.8581 | | No log | 0.91 | 400 | 0.2882 | 0.8690 | | 0.3813 | 1.13 | 500 | 0.2825 | 0.8745 | | 0.3813 | 1.36 | 600 | 0.2686 | 0.9007 | | 0.3813 | 1.59 | 700 | 0.2381 | 0.9017 | | 0.3813 | 1.81 | 800 | 0.3643 | 0.8734 | | 0.3813 | 2.04 | 900 | 0.2873 | 0.8930 | | 0.2736 | 2.27 | 1000 | 0.2236 | 0.9039 | | 0.2736 | 2.49 | 1100 | 0.2652 | 0.8723 | | 0.2736 | 2.72 | 1200 | 0.2793 | 0.8952 | | 0.2736 | 2.95 | 1300 | 0.2158 | 0.8974 | | 0.2736 | 3.17 | 1400 | 0.2410 | 0.8886 | | 0.2093 | 3.4 | 1500 | 0.2262 | 0.9017 | | 0.2093 | 3.63 | 1600 | 0.2110 | 0.9214 | | 0.2093 | 3.85 | 1700 | 0.2048 | 0.9138 | | 0.2093 | 4.08 | 1800 | 0.2044 | 0.9127 | | 0.2093 | 4.31 | 1900 | 0.2591 | 0.9007 | | 0.1764 | 4.54 | 2000 | 0.2466 | 0.8952 | | 0.1764 | 4.76 | 2100 | 0.2554 | 0.9017 | | 0.1764 | 4.99 | 2200 | 0.2145 | 0.9203 | | 0.1764 | 5.22 | 2300 | 0.3187 | 0.9039 | | 0.1764 | 5.44 | 2400 | 0.3336 | 0.9050 | | 0.1454 | 5.67 | 2500 | 0.2542 | 0.9127 | | 0.1454 | 5.9 | 2600 | 0.2796 | 0.8952 | | 0.1454 | 6.12 | 2700 | 0.2410 | 0.9181 | | 0.1454 | 6.35 | 2800 | 0.2503 | 0.9148 | | 0.1454 | 6.58 | 2900 | 0.2966 | 0.8996 | | 0.1216 | 6.8 | 3000 | 0.1978 | 0.9312 | | 0.1216 | 7.03 | 3100 | 0.2297 | 0.9214 | | 0.1216 | 7.26 | 3200 | 0.2768 | 0.9203 | | 0.1216 | 7.48 | 3300 | 0.3356 | 0.9083 | | 0.1216 | 7.71 | 3400 | 0.3415 | 0.9138 | | 0.1038 | 7.94 | 3500 | 0.2398 | 0.9061 | | 0.1038 | 8.16 | 3600 | 0.3347 | 0.8963 | | 0.1038 | 8.39 | 3700 | 0.2199 | 0.9203 | | 0.1038 | 8.62 | 3800 | 0.2943 | 0.9061 | | 0.1038 | 8.84 | 3900 | 0.2561 | 0.9181 | | 0.0925 | 9.07 | 4000 | 0.4170 | 0.8777 | | 0.0925 | 9.3 | 4100 | 0.3638 | 0.8974 | | 0.0925 | 9.52 | 4200 | 0.3233 | 0.9094 | | 0.0925 | 9.75 | 4300 | 0.3496 | 0.9203 | | 0.0925 | 9.98 | 4400 | 0.3621 | 0.8996 | | 0.0788 | 10.2 | 4500 | 0.3260 | 0.9116 | | 0.0788 | 10.43 | 4600 | 0.3979 | 0.9061 | | 0.0788 | 10.66 | 4700 | 0.3301 | 0.8974 | | 0.0788 | 10.88 | 4800 | 0.2197 | 0.9105 | | 0.0788 | 11.11 | 4900 | 0.3306 | 0.9148 | | 0.0708 | 11.34 | 5000 | 0.3318 | 0.9181 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
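The card above ends without a usage section. A minimal inference sketch follows; the repo id's author prefix is an assumption, inferred from the sibling "trial" model later in this dump, since the modelId field for this record is not visible in this excerpt:

```python
from transformers import pipeline

# Hypothetical repo id: the "AykeeSalazar/" prefix is assumed, not confirmed.
classifier = pipeline(
    "image-classification",
    model="AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-v1",
)
print(classifier("violation_example.jpg"))  # local path or URL to an image
```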
jianzhnie/q_FrozenLake_v1_4x4_noSlippery
jianzhnie
2022-07-28T02:20:56Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-27T11:49:50Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q_FrozenLake_v1_4x4_noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # Q-Learning Agent playing FrozenLake-v1 This is a trained model of a **Q-Learning** agent playing FrozenLake-v1. ## Usage ```python model = load_from_hub(repo_id="jianzhnie/q_FrozenLake_v1_4x4_noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
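The usage snippet above calls `load_from_hub` and `evaluate_agent` without defining them. A minimal `load_from_hub` sketch compatible with the pickled Q-table format, following the deep-rl-class convention (an assumption; the author's exact helper may differ):

```python
import pickle  # the course material uses pickle5 on Python < 3.8

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model dict (qtable, env_id, ...)."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub("jianzhnie/q_FrozenLake_v1_4x4_noSlippery", "q-learning.pkl")
print(model.keys())  # inspect the stored attributes before evaluating
```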
huggingtweets/penguinnnno
huggingtweets
2022-07-28T01:35:06Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-28T01:07:43Z
--- language: en thumbnail: http://www.huggingtweets.com/penguinnnno/1658971968390/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1452082178741968901/oERkhKFL_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">penguino</div> <div style="text-align: center; font-size: 14px;">@penguinnnno</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from penguino. | Data | penguino | | --- | --- | | Tweets downloaded | 1865 | | Retweets | 839 | | Short tweets | 377 | | Tweets kept | 649 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hb9ovan/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @penguinnnno's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/4k058458) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/4k058458/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/penguinnnno') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-trial
AykeeSalazar
2022-07-28T01:02:09Z
53
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-28T00:29:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vc-bantai-vit-withoutAMBI-adunest-trial results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder args: Violation-Classification---Raw-9 metrics: - name: Accuracy type: accuracy value: 0.7797741273100616 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vc-bantai-vit-withoutAMBI-adunest-trial This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4289 - Accuracy: 0.7798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.4 | 100 | 1.0782 | 0.4451 | | No log | 0.8 | 200 | 0.5634 | 0.7156 | | No log | 1.2 | 300 | 0.7181 | 0.6684 | | No log | 1.61 | 400 | 0.4289 | 0.7798 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
kabelomalapane/Af-En_update
kabelomalapane
2022-07-27T23:37:19Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-07-27T20:53:09Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: Af-En_update results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Af-En_update This model is a fine-tuned version of [Helsinki-NLP/opus-mt-af-en](https://huggingface.co/Helsinki-NLP/opus-mt-af-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7197 - Bleu: 55.3346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 1.3745 | 1.0 | 2553 | 1.7537 | 51.9270 | | 1.0462 | 2.0 | 5106 | 1.6305 | 53.9359 | | 0.896 | 3.0 | 7659 | 1.6216 | 54.3049 | | 0.7824 | 4.0 | 10212 | 1.6108 | 54.9902 | | 0.6974 | 5.0 | 12765 | 1.6183 | 55.0265 | | 0.643 | 6.0 | 15318 | 1.6207 | 55.4137 | | 0.5635 | 7.0 | 17871 | 1.6276 | 55.1335 | | 0.5141 | 8.0 | 20424 | 1.6498 | 55.2215 | | 0.4681 | 9.0 | 22977 | 1.6678 | 55.2000 | | 0.4304 | 10.0 | 25530 | 1.6797 | 55.2748 | | 0.425 | 11.0 | 28083 | 1.7004 | 55.0478 | | 0.398 | 12.0 | 30636 | 1.7013 | 55.3591 | | 0.3759 | 13.0 | 33189 | 1.7082 | 55.3225 | | 0.3681 | 14.0 | 35742 | 1.7151 | 55.1793 | | 0.3571 | 15.0 | 38295 | 1.7197 | 55.2729 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
akraut/CDS_BERT_CLF
akraut
2022-07-27T23:06:24Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2022-07-27T23:06:07Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision | |----|-------------|-----|------|------|-------|-------|------------------| |Adam|0.011362014338374138|0.0|0.8999999761581421|0.9990000128746033|1e-07|False|float32| ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
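The Keras card above lists optimizer hyperparameters but no loading code. Since the record's `library_name` is `keras`, the standard entry point is `from_pretrained_keras`; a minimal sketch (the expected inputs and preprocessing are not documented in the card):

```python
from huggingface_hub import from_pretrained_keras

# Loads the TF-Keras model saved in this repo; inspect the summary to see
# what inputs it expects, since the card does not say.
model = from_pretrained_keras("akraut/CDS_BERT_CLF")
model.summary()
```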
dbarbedillo/a2c-AntBulletEnv-v0
dbarbedillo
2022-07-27T22:25:58Z
2
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-27T22:24:45Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1748.24 +/- 84.28 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
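A hedged completion of the TODO usage block above; the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="dbarbedillo/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename, not confirmed by the card
)
model = A2C.load(checkpoint)
```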
OMARS200/primer_modelo_hub
OMARS200
2022-07-27T22:12:35Z
3
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-27T04:03:19Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: OMARS200/primer_modelo_hub results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # OMARS200/primer_modelo_hub This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0892 - Validation Loss: 0.6573 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1565 | 0.6118 | 0 | | 0.0892 | 0.6573 | 1 | ### Framework versions - Transformers 4.21.0 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
ejin/bert-base-cased-finetuned-ner
ejin
2022-07-27T21:16:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-26T20:04:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-cased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.8940432730834298 - name: Recall type: recall value: 0.9008612955320294 - name: F1 type: f1 value: 0.8974393350315055 - name: Accuracy type: accuracy value: 0.9749955848590098 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0919 - Precision: 0.8940 - Recall: 0.9009 - F1: 0.8974 - Accuracy: 0.9750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1147 | 1.0 | 1756 | 0.0919 | 0.8940 | 0.9009 | 0.8974 | 0.9750 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
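The NER card above reports entity-level precision/recall but gives no usage; a minimal inference sketch with the standard transformers API (nothing model-specific assumed):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ejin/bert-base-cased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```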
SharpAI/mal-tls-bert-base
SharpAI
2022-07-27T20:51:25Z
3
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-27T19:09:23Z
--- tags: - generated_from_keras_callback model-index: - name: mal_tls-bert-base results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mal_tls-bert-base This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
ai4bharat/indicwav2vec-hindi
ai4bharat
2022-07-27T20:31:31Z
4,110
16
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "asr", "hi", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-27T19:43:11Z
--- language: hi metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - wav2vec2 - asr license: apache-2.0 --- # IndicWav2Vec-Hindi This is a [Wav2Vec2](https://arxiv.org/abs/2006.11477) style ASR model trained in [fairseq](https://github.com/facebookresearch/fairseq) and ported to Hugging Face. More details on datasets, training-setup and conversion to HuggingFace format can be found in the [IndicWav2Vec](https://github.com/AI4Bharat/IndicWav2Vec) repo. *Note: This model doesn't support inference with Language Model.* ## Script to Run Inference ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F DEVICE_ID = "cuda" if torch.cuda.is_available() else "cpu" MODEL_ID = "ai4bharat/indicwav2vec-hindi" sample = next(iter(load_dataset("common_voice", "hi", split="test", streaming=True))) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48000, 16000).numpy() model = AutoModelForCTC.from_pretrained(MODEL_ID).to(DEVICE_ID) processor = AutoProcessor.from_pretrained(MODEL_ID) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values.to(DEVICE_ID)).logits.cpu() prediction_ids = torch.argmax(logits, dim=-1) output_str = processor.batch_decode(prediction_ids)[0] print(f"Greedy Decoding: {output_str}") ``` # **About AI4Bharat** - Website: https://ai4bharat.org/ - Code: https://github.com/AI4Bharat - HuggingFace: https://huggingface.co/ai4bharat
unclearsoup/creative
unclearsoup
2022-07-27T20:00:32Z
0
0
null
[ "license:cc-by-4.0", "region:us" ]
null
2022-07-27T19:58:27Z
--- license: cc-by-4.0 --- ```python import requests API_TOKEN = "hf_..." # placeholder: set this to your Hugging Face Inference API token API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom" headers = {"Authorization": f"Bearer {API_TOKEN}"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() ```
xhyi/PT_GPTNEO350_ATG
xhyi
2022-07-27T19:23:11Z
1,631
20
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# GPT NEO 350M This repository hosts the pulled GPT-Neo 350M checkpoint that EleutherAI removed. I am keeping it 😎
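The card gives no usage instructions; a minimal generation sketch with the standard pipeline API (nothing model-specific assumed):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="xhyi/PT_GPTNEO350_ATG")
print(generator("The meaning of life is", max_new_tokens=40)[0]["generated_text"])
```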
kabelomalapane/En-Af_update
kabelomalapane
2022-07-27T18:17:15Z
118
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-07-27T16:11:00Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: En-Af_update results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # En-Af_update This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-af](https://huggingface.co/Helsinki-NLP/opus-mt-en-af) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8089 - Bleu: 45.1780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 1.4243 | 1.0 | 2553 | 1.8451 | 42.1314 | | 1.0987 | 2.0 | 5106 | 1.7509 | 44.0714 | | 0.9329 | 3.0 | 7659 | 1.7340 | 44.6003 | | 0.8365 | 4.0 | 10212 | 1.7260 | 44.7820 | | 0.7556 | 5.0 | 12765 | 1.7590 | 45.1180 | | 0.6944 | 6.0 | 15318 | 1.7715 | 45.1451 | | 0.652 | 7.0 | 17871 | 1.7696 | 45.1025 | | 0.6132 | 8.0 | 20424 | 1.8060 | 45.1781 | | 0.5832 | 9.0 | 22977 | 1.8135 | 45.2485 | | 0.5602 | 10.0 | 25530 | 1.8089 | 45.1730 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
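A minimal usage sketch for the En→Af checkpoint above (standard Marian translation pipeline; nothing model-specific assumed):

```python
from transformers import pipeline

translator = pipeline("translation", model="kabelomalapane/En-Af_update")
print(translator("The weather is beautiful today.")[0]["translation_text"])
```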
d2niraj555/distilbert-base-uncased-finetuned-emotion
d2niraj555
2022-07-27T17:24:50Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-26T10:43:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9241328800048197 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2133 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8087 | 1.0 | 250 | 0.3067 | 0.905 | 0.9030 | | 0.2439 | 2.0 | 500 | 0.2133 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
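A minimal inference sketch for the emotion classifier above (standard text-classification pipeline):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="d2niraj555/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # label names depend on the exported config
```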
asi/igpt-fr-cased-base
asi
2022-07-27T17:12:36Z
5
4
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "tf", "text-to-image", "fr", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-to-image
2022-07-26T20:57:33Z
--- language: - fr thumbnail: https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png tags: - tf - pytorch - gpt2 - text-to-image license: apache-2.0 --- <img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/igpt-logo.png" width="400"> ## Model description **iGPT-fr** 🇫🇷 is an incremental GPT language model for French, developed by the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We adapted the [GPT-fr 🇫🇷](https://huggingface.co/asi/gpt-fr-cased-base) model to generate images conditioned on text inputs. ## Intended uses & limitations The model can be leveraged for image generation tasks. The model is currently in a development phase. #### How to use The model can be used through the 🤗 `Transformers` library. You will also need to install the `Taming Transformers` library for high-resolution image synthesis: ```bash pip install git+https://github.com/CompVis/taming-transformers.git ``` ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel from huggingface_hub import hf_hub_download from omegaconf import OmegaConf from taming.models import vqgan import torch from PIL import Image import numpy as np # Load VQGAN model vqgan_ckpt = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="model.ckpt", force_download=False) vqgan_config = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="config.yaml", force_download=False) config = OmegaConf.load(vqgan_config) vqgan_model = vqgan.VQModel(**config.model.params) vqgan_model.eval().requires_grad_(False) vqgan_model.init_from_ckpt(vqgan_ckpt) # Load pretrained model device = "cuda" if torch.cuda.is_available() else "cpu" model = GPT2LMHeadModel.from_pretrained("asi/igpt-fr-cased-base").to(device) model.eval() tokenizer = GPT2Tokenizer.from_pretrained("asi/igpt-fr-cased-base") # Generate a sample of text input_sentence = "Une carte de l'europe" input_ids = tokenizer.encode(input_sentence, return_tensors='pt') input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1) # Add image generation token greedy_output = model.generate( input_ids.to(device), max_length=256+input_ids.shape[1], do_sample=True, top_p=0.92, top_k=0) def custom_to_pil(x): x = x.detach().cpu() x = torch.clamp(x, -1., 1.) x = (x + 1.)/2. x = x.permute(1,2,0).numpy() x = (255*x).astype(np.uint8) x = Image.fromarray(x) if not x.mode == "RGB": x = x.convert("RGB") return x z_idx = greedy_output[0, input_ids.shape[1]:] - 50001 z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256)) x_rec = vqgan_model.decode(z_quant).to('cpu')[0] display(custom_to_pil(x_rec)) ``` You may also filter results based on CLIP: ```python from tqdm import tqdm def hallucinate(prompt, num_images=64): input_ids = tokenizer.encode(prompt, return_tensors='pt') input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1).to(device) # Add image generation token all_images = [] for i in tqdm(range(num_images)): greedy_output = model.generate( input_ids.to(device), max_length=256+input_ids.shape[1], do_sample=True, top_p=0.92, top_k=0) z_idx = greedy_output[0, input_ids.shape[1]:] - 50001 z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256)) x_rec = vqgan_model.decode(z_quant).to('cpu')[0] all_images.append(custom_to_pil(x_rec)) return all_images input_sentence = "Une carte de l'europe" all_images = hallucinate(input_sentence) from transformers import pipeline opus_model = "Helsinki-NLP/opus-mt-fr-en" opus_translator = pipeline("translation", model=opus_model) opus_translator(input_sentence) from transformers import CLIPProcessor, CLIPModel clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") def clip_top_k(prompt, images, k=8): prompt_en = opus_translator(prompt)[0]['translation_text'] # translate the French prompt for CLIP inputs = clip_processor(text=prompt_en, images=images, return_tensors="pt", padding=True) outputs = clip_model(**inputs) logits = outputs.logits_per_text # this is the image-text similarity score scores = np.array(logits[0].detach()).argsort()[-k:][::-1] return [images[score] for score in scores] filtered_images = clip_top_k(input_sentence, all_images) for fi in filtered_images: display(fi) ``` ## Training data We created a dedicated corpus to train our generative model. The training corpus consists of text-image pairs. We aggregated portions from existing corpora: [Laion-5B](https://laion.ai/blog/laion-5b/) and [WIT](https://github.com/google-research-datasets/wit). The final dataset includes 10,807,534 samples. ## Training procedure We pre-trained the model on the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. We performed the training with a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed across 8 compute nodes of 8 GPUs each. We used data parallelization to divide each micro-batch across the computing units. We estimated the total emissions at 1161.22 kgCO2eq, using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al., (2019)](lacoste-2019).
heriosousa/a2c-AntBulletEnv-v0
heriosousa
2022-07-27T17:03:12Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-27T17:02:08Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1020.71 +/- 201.31 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
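As with the similar A2C card above, the usage block is a TODO. A hedged load-and-evaluate sketch; the checkpoint filename follows the usual huggingface_sb3 convention and is an assumption:

```python
import gym
import pybullet_envs  # noqa: F401  (importing registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="heriosousa/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```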
Evelyn18/roberta-base-spanish-squades-becasIncentivos4
Evelyn18
2022-07-27T16:52:12Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-27T15:56:33Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: roberta-base-spanish-squades-becasIncentivos4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-spanish-squades-becasIncentivos4 This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 1.7734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 11 | 1.8136 | | No log | 2.0 | 22 | 1.7734 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
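A minimal inference sketch for the Spanish QA checkpoint above; the question/context pair is illustrative only:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-becasIncentivos4",
)
result = qa(
    question="¿Quién puede solicitar la beca?",  # illustrative example
    context="La beca de incentivos puede ser solicitada por estudiantes matriculados.",
)
print(result["answer"], result["score"])
```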
mariastull/Reinforce-1
mariastull
2022-07-27T16:29:13Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-27T16:29:03Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-1 results: - metrics: - type: mean_reward value: 11.90 +/- 1.81 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
Go2Heart/BERT_Mod_1
Go2Heart
2022-07-27T16:17:44Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-27T16:07:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: BERT_Mod_1 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.541934635424655 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_Mod_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1787 - Matthews Correlation: 0.5419 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.1616 | 1.0 | 535 | 0.9278 | 0.4979 | | 0.1128 | 2.0 | 1070 | 1.0487 | 0.5046 | | 0.0712 | 3.0 | 1605 | 1.0155 | 0.5306 | | 0.0952 | 4.0 | 2140 | 1.1860 | 0.5147 | | 0.0698 | 5.0 | 2675 | 1.1787 | 0.5419 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
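The card above reports Matthews correlation, the standard CoLA metric. For reference, it can be recomputed from predictions with scikit-learn; a toy sketch with made-up labels:

```python
from sklearn.metrics import matthews_corrcoef

# Made-up labels purely to illustrate the metric call.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # 0.5 on this toy example
```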
huggingtweets/interiordesign
huggingtweets
2022-07-27T15:30:24Z
71
4
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-27T15:21:57Z
--- language: en thumbnail: http://www.huggingtweets.com/interiordesign/1658935819881/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1544346507578589184/x9URB7Yy_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Interior Design</div> <div style="text-align: center; font-size: 14px;">@interiordesign</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Interior Design. | Data | Interior Design | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 97 | | Short tweets | 2 | | Tweets kept | 3151 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vl5m9w7s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @interiordesign's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36lgkxh5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36lgkxh5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/interiordesign') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
annahaz/xlm-roberta-base-finetuned-misogyny-sexism
annahaz
2022-07-27T14:45:20Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-05T19:00:29Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: xlm-roberta-base-finetuned-misogyny-sexism results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-misogyny-sexism This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9064 - Accuracy: 0.8334 - F1: 0.3322 - Precision: 0.2498 - Recall: 0.4961 - Mae: 0.1666 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:| | 0.3869 | 1.0 | 2395 | 0.2905 | 0.8778 | 0.3528 | 0.3164 | 0.3988 | 0.1222 | | 0.3539 | 2.0 | 4790 | 0.4143 | 0.8278 | 0.3465 | 0.2536 | 0.5467 | 0.1722 | | 0.3124 | 3.0 | 7185 | 0.3327 | 0.8568 | 0.3583 | 0.2864 | 0.4786 | 0.1432 | | 0.2817 | 4.0 | 9580 | 0.5621 | 0.7329 | 0.3092 | 0.1972 | 0.7160 | 0.2671 | | 0.2651 | 5.0 | 11975 | 0.4376 | 0.8520 | 0.3607 | 0.2821 | 0.5 | 0.1480 | | 0.2249 | 6.0 | 14370 | 0.5581 | 0.8326 | 0.3312 | 0.2485 | 0.4961 | 0.1674 | | 0.1958 | 7.0 | 16765 | 0.6728 | 0.8382 | 0.3234 | 0.2484 | 0.4630 | 0.1618 | | 0.1899 | 8.0 | 19160 | 0.7404 | 0.8304 | 0.3316 | 0.2471 | 0.5039 | 0.1696 | | 0.1619 | 9.0 | 21555 | 0.8309 | 0.8461 | 0.3382 | 0.2639 | 0.4708 | 0.1539 | | 0.1453 | 10.0 | 23950 | 0.9064 | 0.8334 | 0.3322 | 0.2498 | 0.4961 | 0.1666 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
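The card above does not document label names or usage; a minimal scoring sketch that stays agnostic about label semantics:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "annahaz/xlm-roberta-base-finetuned-misogyny-sexism"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example text to score.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # which index means "misogynistic/sexist" is not stated in the card
```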