| Column | Dtype | Range |
|:--|:--|:--|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-14 06:27:53 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 519 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-14 06:27:45 |
| card | string | lengths 11 to 1.01M |
acidhills/sd-class-butterflies-32
acidhills
2022-12-18T14:55:17Z
6
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2022-12-18T14:54:51Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('acidhills/sd-class-butterflies-32') image = pipeline().images[0] image ```
ziemke/q-Taxi-v3
ziemke
2022-12-18T14:46:16Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T14:15:02Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ziemke/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
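In these Q-learning cards, `load_from_hub` is not an import from a published library; it is a helper defined in the training notebook. Below is a minimal, hedged sketch of one possible reconstruction plus a greedy rollout: only the `"env_id"` key is confirmed by the card, while the `"qtable"` key, the pickle layout, and the classic `gym` API (where `reset()` returns just the observation) are assumptions.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="ziemke/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # Taxi-v3

# Greedy rollout with the stored Q-table (the "qtable" key is an assumption).
state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, _ = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```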
lambdaofgod/document_nbow_embedder
lambdaofgod
2022-12-18T14:45:13Z
0
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-12-18T14:45:06Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # lambdaofgod/document_nbow_embedder This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 200 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('lambdaofgod/document_nbow_embedder') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document_nbow_embedder) ## Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(84046, 200) ) (1): WordWeights( (emb_layer): Embedding(84046, 1) ) (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
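The card mentions clustering and semantic search as intended uses but only shows plain encoding. A short follow-up sketch of query-to-document ranking with cosine similarity; the query and document strings below are illustrative, not taken from the model's training data.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("lambdaofgod/document_nbow_embedder")

# Rank candidate documents against a query by cosine similarity.
query = "library for training graph neural networks"
documents = [
    "A PyTorch library for deep learning on graphs",
    "A command-line tool for resizing and cropping images",
]
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(documents, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_embs)[0]
for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```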
lambdaofgod/query_nbow_embedder
lambdaofgod
2022-12-18T14:44:55Z
0
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-12-18T14:44:50Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # lambdaofgod/query_nbow_embedder This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 200 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('lambdaofgod/query_nbow_embedder') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/query_nbow_embedder) ## Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(6912, 200) ) (1): WordWeights( (emb_layer): Embedding(6912, 1) ) (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
RedPandaAINLP/Taxi-v3-lr05-ms199-ep1M
RedPandaAINLP
2022-12-18T14:00:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T14:00:31Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-lr05-ms199-ep1M results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="RedPandaAINLP/Taxi-v3-lr05-ms199-ep1M", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Finnish-NLP/whisper-large-v2-finnish
Finnish-NLP
2022-12-18T13:57:57Z
17
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "finnish", "fi", "dataset:mozilla-foundation/common_voice_11_0", "dataset:google/fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-17T11:11:25Z
--- language: - fi license: apache-2.0 tags: - whisper-event - finnish datasets: - mozilla-foundation/common_voice_11_0 - google/fleurs metrics: - wer - cer model-index: - name: Whisper Large V2 Finnish results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: fi split: test args: fi metrics: - name: Wer type: wer value: 10.42 - name: Cer type: cer value: 1.91 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: FLEURS type: google/fleurs config: fi_fi split: test args: fi_fi metrics: - name: Wer type: wer value: 10.2 - name: Cer type: cer value: 3.36 ---
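The Finnish Whisper card above carries only evaluation metadata and stops before any usage section. A minimal inference sketch, assuming the checkpoint loads through the standard transformers ASR pipeline; the audio filename is a placeholder.

```python
from transformers import pipeline

# Hypothetical usage sketch: transcribe a local Finnish audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="Finnish-NLP/whisper-large-v2-finnish",
    chunk_length_s=30,  # chunk long recordings to fit Whisper's 30 s window
)
result = asr("sample_finnish.wav")  # placeholder path
print(result["text"])
```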
Payoto/t5-small-finetuned-xsum
Payoto
2022-12-18T13:51:34Z
84
0
transformers
[ "transformers", "pytorch", "optimum_graphcore", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-14T18:25:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - total_eval_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - training precision: Mixed Precision ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6962 | 1.0 | 3188 | 2.5273 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cpu - Datasets 2.7.1 - Tokenizers 0.12.1
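The card above records only the training setup. A hedged usage sketch, assuming the IPU-trained checkpoint loads through the standard transformers classes on CPU/GPU and that the usual T5 `summarize:` prefix applies.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Payoto/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder article text; XSum-style models produce one-sentence summaries.
article = "The full text of a news article to be summarized goes here."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```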
MaxReimann/WISE-APDrawing-XDoG
MaxReimann
2022-12-18T13:49:32Z
0
0
null
[ "arxiv:2207.14606", "license:mit", "region:us" ]
null
2022-12-18T13:00:40Z
--- license: mit --- This model can create line-drawings using an XDoG algorithmic effect with predicted parametrizations. [Demo space of framework](https://huggingface.co/spaces/MaxReimann/Whitebox-Style-Transfer-Editing) <br> Code for framework: [https://github.com/winfried-ripken/wise](https://github.com/winfried-ripken/wise) <br> Paper on arXiv: [arxiv/2207.14606](https://arxiv.org/abs/2207.14606) <img src='https://huggingface.co/MaxReimann/WISE-APDrawing-XDoG/resolve/main/xdog_apdrawing.jpg'/> [WISE: Whitebox Image Stylization by Example-based Learning](https://ivpg.hpi3d.de/wise) [Winfried Lötzsch](https://scholar.google.de/citations?user=wAVKdLcAAAAJ&hl=de)\*<sup>1</sup>, [Max Reimann](https://hpi.de/doellner/people/max-reimann.html)\*<sup>1</sup>, [Martin Büßemeyer](https://www.researchgate.net/profile/Martin-Buessemeyer)<sup>1</sup>, [Amir Semmo](http://asemmo.github.io/)<sup>2</sup>, [Jürgen Döllner](https://hpi.de/forschung/fachgebiete/computergrafische-systeme.html)<sup>1</sup>, [Matthias Trapp](https://hpi.de/doellner/people/trapp.html)<sup>1</sup> <br> <sup>1</sup>Hasso Plattner Institute, University of Potsdam, Germany, <sup>2</sup>Digitalmasterpieces GmbH, Germany<br/> \*denotes equal contribution. Published at ECCV 2022. ```bibtex @InProceedings{loetzsch2022wise, author={Lötzsch, Winfried and Reimann, Max and Büssemeyer, Martin and Semmo, Amir and Döllner, Jürgen and Trapp, Matthias}, title="WISE: Whitebox Image Stylization by Example-Based Learning", booktitle="Computer Vision -- ECCV 2022", year="2022" } ```
RedPandaAINLP/Taxi-v3-lr05-ms199
RedPandaAINLP
2022-12-18T13:43:46Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T13:41:05Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-lr05-ms199 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="RedPandaAINLP/Taxi-v3-lr05-ms199", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Honza/Taxi-v3
Honza
2022-12-18T13:41:40Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T13:41:32Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Honza/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
RedPandaAINLP/q-FrozenLake-v1-4x4-noSlippery
RedPandaAINLP
2022-12-18T13:34:40Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T13:34:30Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="RedPandaAINLP/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Payoto/roberta-base-finetuned-squad
Payoto
2022-12-18T13:28:43Z
67
0
transformers
[ "transformers", "pytorch", "optimum_graphcore", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-11-17T18:40:43Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-squad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 3 - training precision: Mixed Precision ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cpu - Datasets 2.7.1 - Tokenizers 0.12.1
ahmadmwali/finetuning-sentiment-hausa21
ahmadmwali
2022-12-18T13:24:52Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-18T10:58:31Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-hausa21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-hausa21 This model is a fine-tuned version of [mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1444 - Accuracy: 0.9586 - F1: 0.9586 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
Honza/q-FrozenLake-v1-4x4-noSlippery
Honza
2022-12-18T13:16:44Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T13:16:40Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Honza/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Payoto/roberta-base-finetuned-swag
Payoto
2022-12-18T13:12:48Z
36
0
transformers
[ "transformers", "pytorch", "optimum_graphcore", "roberta", "generated_from_trainer", "dataset:swag", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-11-14T12:16:42Z
--- license: mit tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: roberta-base-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-swag This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.4382 - Accuracy: 0.8390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - training precision: Mixed Precision ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5707 | 1.0 | 574 | 0.4990 | 0.8097 | | 0.5092 | 2.0 | 1148 | 0.4321 | 0.8361 | | 0.3597 | 3.0 | 1722 | 0.4382 | 0.8390 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cpu - Datasets 2.7.1 - Tokenizers 0.12.1
leviethoang/wav2vec2-large-xls-r-300m-vi-75p
leviethoang
2022-12-18T13:09:07Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-18T09:25:35Z
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-vi-75p results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-vi-75p This model is a fine-tuned version of [leviethoang/wav2vec2-large-xls-r-300m-vi-25p](https://huggingface.co/leviethoang/wav2vec2-large-xls-r-300m-vi-25p) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.7880 - Wer: 0.4324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.962 | 1.68 | 400 | 1.2033 | 0.4428 | | 0.7977 | 3.36 | 800 | 1.3410 | 0.4731 | | 0.644 | 5.04 | 1200 | 1.4682 | 0.4796 | | 0.5156 | 6.72 | 1600 | 1.4940 | 0.4826 | | 0.4531 | 8.4 | 2000 | 1.5071 | 0.4734 | | 0.3882 | 10.08 | 2400 | 1.5408 | 0.4694 | | 0.3469 | 11.76 | 2800 | 1.5975 | 0.4697 | | 0.3096 | 13.45 | 3200 | 1.7120 | 0.4728 | | 0.2825 | 15.13 | 3600 | 1.7052 | 0.4632 | | 0.2607 | 16.81 | 4000 | 1.6870 | 0.4575 | | 0.2301 | 18.49 | 4400 | 1.7205 | 0.4653 | | 0.2096 | 20.17 | 4800 | 1.7352 | 0.4504 | | 0.1915 | 21.85 | 5200 | 1.7948 | 0.4465 | | 0.1685 | 23.53 | 5600 | 1.7994 | 0.4400 | | 0.1543 | 25.21 | 6000 | 1.7613 | 0.4435 | | 0.1378 | 26.89 | 6400 | 1.8300 | 0.4365 | | 0.1278 | 28.57 | 6800 | 1.7880 | 0.4324 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
MrDivakaruni/ppo-LunarLander-v2
MrDivakaruni
2022-12-18T12:49:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T10:44:56Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 237.63 +/- 20.23 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
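The usage section in these PPO cards is still the template's TODO. A hedged sketch of one way to load and evaluate the agent with huggingface_sb3; the zip filename follows the common `ppo-LunarLander-v2.zip` convention and is an assumption, not confirmed by the card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The checkpoint filename is an assumption; check the repository's file list.
checkpoint = load_from_hub(
    repo_id="MrDivakaruni/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```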
Payoto/roberta-base-finetuned-cola
Payoto
2022-12-18T12:41:07Z
5
0
transformers
[ "transformers", "pytorch", "optimum_graphcore", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-14T12:00:49Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: roberta-base-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-cola This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5127 - Matthews Correlation: 0.5815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - total_eval_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - training precision: Mixed Precision ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.3138 | 1.0 | 133 | 0.4104 | 0.5538 | | 0.2082 | 2.0 | 266 | 0.4849 | 0.5402 | | 0.1687 | 3.0 | 399 | 0.5127 | 0.5815 | | 0.0865 | 4.0 | 532 | 0.5752 | 0.5661 | | 0.1008 | 5.0 | 665 | 0.5952 | 0.5764 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cpu - Datasets 2.7.1 - Tokenizers 0.12.1
ales/whisper-base-belarusian
ales
2022-12-18T12:35:28Z
20
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "be", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-16T21:30:37Z
--- language: - be license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Base Belarusian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 be type: mozilla-foundation/common_voice_11_0 config: be split: validation args: be metrics: - name: Wer type: wer value: 12.206885082321635 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Base Belarusian This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_11_0 be dataset. It achieves the following results on the evaluation set: - Loss: 0.1080 - Wer: 12.2069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 6000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2445 | 0.17 | 1000 | 0.3059 | 32.4163 | | 0.1823 | 0.33 | 2000 | 0.2004 | 22.1259 | | 0.1412 | 0.5 | 3000 | 0.1752 | 20.0700 | | 0.1093 | 0.67 | 4000 | 0.1413 | 16.0533 | | 0.1137 | 0.83 | 5000 | 0.1155 | 13.3108 | | 0.0585 | 1.1 | 6000 | 0.1080 | 12.2069 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
tashatsar/ppo-LunarLander-v2-updates
tashatsar
2022-12-18T12:31:14Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T12:30:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 296.52 +/- 12.94 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
kohya-ss/kawase-hasui-diffusion
kohya-ss
2022-12-18T12:29:59Z
0
14
null
[ "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2022-12-18T11:02:11Z
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - text-to-image --- Kawase Hasui Diffusion is trained on paintings by [KAWASE Hasui(川瀬巴水)](https://en.wikipedia.org/wiki/Hasui_Kawase). The model was fine-tuned from Stable Diffusion v2-1 with the DreamBooth method at a learning rate of 1.0e-6 for 2,600 steps with a batch size of 8 (8 train or reg images), on 169 training images and 664 regularization images. This model is based on SD2.1 768/v, so if you use it in the popular Web UI, please rename 'v2-inference-v.yaml' to 'kawase-hasui-epoch-000003.yaml' (or ~_fp16.yaml) and place it in the same folder as the .safetensors file. The training prompt is "picture by lvl". ## Examples ![Japan tourism poster](./sample1.png) ``` picture by lvl, japan tourism poster seed : 968191097, sampler: k_euler_a, steps : 160, CFG scale : 5.5 ``` ![Cyberpunk Akihabara](./sample2.png) ``` picture by lvl, cyberpunk akihabara seed : 1418478714, sampler: k_euler_a, steps : 160, CFG scale : 5.5 ``` ![Ruined castle](./sample3.png) ``` picture by lvl, ruined castle, fantasy, dawn seed : 897433524, sampler: k_euler_a, steps : 160, CFG scale : 5.5 ``` ![Party of adventurers](./sample4.png) ``` picture by lvl, fantasy, party of adventurers, ready to fight, in front of ruined temple seed : 1814292911, sampler: k_euler_a, steps : 160, CFG scale : 5.5 ``` ## License CreativeML Open RAIL-M
tagotec/ppo-LunarLander-v2
tagotec
2022-12-18T12:20:00Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T12:19:36Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.91 +/- 16.15 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
arampacha/whisper-large-hy
arampacha
2022-12-18T12:15:04Z
7
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hy", "dataset:mozilla-foundation/common_voice_11_0", "dataset:google/fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-14T11:27:53Z
--- language: - hy license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 - google/fleurs metrics: - wer model-index: - name: whisper-base-hy results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: hy-AM split: test args: hy-AM metrics: - name: Wer type: wer value: 22.36842105263158 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-hy This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2204 - Wer: 22.3684 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1394 | 5.87 | 400 | 0.1780 | 28.2895 | | 0.0536 | 11.75 | 800 | 0.1739 | 24.6053 | | 0.0247 | 17.64 | 1200 | 0.2098 | 22.9605 | | 0.0154 | 23.52 | 1600 | 0.2035 | 22.1382 | | 0.0103 | 29.41 | 2000 | 0.2204 | 22.3684 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
Daehoon/PPO-LunarLander-v2
Daehoon
2022-12-18T12:05:10Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T12:04:37Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.77 +/- 28.24 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jlondonobo/whisper-large-v2-es
jlondonobo
2022-12-18T11:32:26Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "es", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-18T04:30:24Z
--- language: - es license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Large V2 Spanish results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 es type: mozilla-foundation/common_voice_11_0 config: es split: test args: es metrics: - name: Wer type: wer value: 5.074450392391248 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large V2 Spanish This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 es dataset. It achieves the following results on the evaluation set: - Loss: 0.1648 - Wer: 5.0745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1556 | 0.5 | 750 | 0.1683 | 5.0959 | | 0.1732 | 1.35 | 1500 | 0.1648 | 5.0745 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
ruzarx/Taxi
ruzarx
2022-12-18T11:09:34Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T09:29:50Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ruzarx/Taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
anuragshas/whisper-large-v2-ml
anuragshas
2022-12-18T11:02:10Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ml", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-12T20:46:25Z
--- language: - ml license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Large-v2 Malayalam results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 ml type: mozilla-foundation/common_voice_11_0 config: ml split: test args: ml metrics: - name: Wer type: wer value: 25.478927203065133 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large-v2 Malayalam This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 ml dataset. It achieves the following results on the evaluation set: - Loss: 0.4170 - Wer: 25.4789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0 | 71.01 | 1000 | 0.4170 | 25.4789 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
heegyu/kobart-text-style-transfer
heegyu
2022-12-18T10:39:49Z
95
6
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-18T09:59:31Z
A Korean text style transfer model created by fine-tuning the kobart model on the Korean Smilestyle Dataset. Example: ```python from transformers import pipeline styles = ['문어체','구어체','안드로이드','아재','채팅', '초등학생','이모티콘','enfp','신사','할아버지','할머니','중학생', '왕','나루토','선비','소심한','번역기'] model = pipeline( 'text2text-generation', model='heegyu/kobart-text-style-transfer' ) def transfer_text_style(text, target_style, **kwargs): input = f"{target_style} 말투로 변환:{text}" out = model(input, max_length=64, **kwargs) print(text, target_style, out[0]['generated_text'], sep="->") text = "반가운. 나는 6마리의 고양이를 소지하고 있다." for style in styles: transfer_text_style(text, style) ``` Result: ``` 반가운. 나는 6마리의 고양이를 소지하고 있다.->문어체->안녕하세요. 저는 6마리의 고양이를 가지고 있습니다. 반가운. 나는 6마리의 고양이를 소지하고 있다.->구어체->안녕. 나는 6마리의 고양이를 가지고 있어. 반가운. 나는 6마리의 고양이를 소지하고 있다.->안드로이드->반갑다. 안드로이드. 6마리. 고양이. 보유. 반가운. 나는 6마리의 고양이를 소지하고 있다.->아재->안녕~~~~ 6마리의 고양이를 가지고 있네 반가운. 나는 6마리의 고양이를 소지하고 있다.->채팅->하이~ 6마리의 고양이 있음 반가운. 나는 6마리의 고양이를 소지하고 있다.->초등학생->ᄒᄋ 난 6마리 고양이 ᄏᄏ 반가운. 나는 6마리의 고양이를 소지하고 있다.->이모티콘->안녕!~()~ 난 6마리의 고양이를 가지고 있어 (皿) 반가운. 나는 6마리의 고양이를 소지하고 있다.->enfp->안녕!!~ 난 6마리의 고양이를 둬! 반가운. 나는 6마리의 고양이를 소지하고 있다.->신사->안녕하십니까, 저는 6마리의 고양이를 가지고 있습니다. 반가운. 나는 6마리의 고양이를 소지하고 있다.->할아버지->안녕하신가...나는 6마리의 고양이를 가지고 있구먼... 반가운. 나는 6마리의 고양이를 소지하고 있다.->할머니->염병 염병할 고양이 놈이여 반가운. 나는 6마리의 고양이를 소지하고 있다.->중학생->ᄒᄋ 난 6마리 고양이 키우는데 반가운. 나는 6마리의 고양이를 소지하고 있다.->왕->반갑소. 짐은 6마리의 고양이를 소유하고 있소. 반가운. 나는 6마리의 고양이를 소지하고 있다.->나루토->안녕하냐니깐! 난 6마리의 고양이를 가지고 있다니깐! 반가운. 나는 6마리의 고양이를 소지하고 있다.->선비->안녕하시오! 소생은 6마리의 고양이를 가지고 있소! 반가운. 나는 6마리의 고양이를 소지하고 있다.->소심한->안녕.... 난 6마리 고양이 있어.. 반가운. 나는 6마리의 고양이를 소지하고 있다.->번역기->반가운, 나는 6마리의 고양이를 가지고 있다. ```
emmyapi/distilbart-podimo-data-5
emmyapi
2022-12-18T10:25:14Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "Summarization", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-15T16:26:15Z
--- tasks: summarization license: apache-2.0 tags: - generated_from_trainer - Summarization model-index: - name: distilbart-podimo-data-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbart-podimo-data-5 This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.1325 ## Model description model | rouge1 | rouge2 | rougeL | rougeLsum --- | --- | --- | --- |--- sshleifer/distilbart-cnn-12-6 | 0.202654 | 0.025766 | 0.123072 | 0.130183 emmyapi/distilbart-podimo-data-3 | 0.235147 | 0.047087 | 0.151535 | 0.161782 emmyapi/distilbart-podimo-data-4 | 0.236926 | 0.048327 | 0.153539 | 0.165026 emmyapi/distilbart-podimo-data-5 | 0.259024 | 0.061665 | 0.167187 | 0.178399 emmyapi/distilbart-podimo-data-7 | 0.298888 | 0.059900 | 0.159479 | 0.185049 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3477 | 3.33 | 500 | 3.7027 | | 2.6286 | 6.66 | 1000 | 3.6995 | | 2.0718 | 10.0 | 1500 | 3.8868 | | 1.7806 | 13.33 | 2000 | 4.1325 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.11.0 - Datasets 2.2.1 - Tokenizers 0.12.1
vaibhav9/distilbert-base-uncased-finetuned-squad
vaibhav9
2022-12-18T10:05:13Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-12-17T11:18:33Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.4335 | 1.0 | 3748 | 1.4521 | | 1.0869 | 2.0 | 7496 | 1.4054 | | 0.8612 | 3.0 | 11244 | 1.5239 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
philschmid/tiny-distilbert-classification
philschmid
2022-12-18T10:04:35Z
55
2
transformers
[ "transformers", "pytorch", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Test model > ## This model is used to run tests for the Hugging Face DLCs
ginton/ppo-LunarLander-v2
ginton
2022-12-18T10:02:13Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T20:52:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 239.00 +/- 67.42 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ruzarx/q-FrozenLake-v1-4x4-noSlippery
ruzarx
2022-12-18T09:26:37Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T09:26:19Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ruzarx/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
dhandapanip/Ss
dhandapanip
2022-12-18T08:09:02Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-12-18T08:09:02Z
--- license: bigscience-bloom-rail-1.0 ---
nu-dialogue/sfc2022-stable-diffusion
nu-dialogue
2022-12-18T07:20:46Z
16
3
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "ja", "japanese", "arxiv:2112.10752", "license:other", "diffusers:JapaneseStableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-18T04:50:44Z
--- language: ja license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - ja - japanese inference: true # extra_gated_prompt: |- # One more step before getting this model. # This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. # The CreativeML OpenRAIL License specifies: # 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content # 2. rinna Co., Ltd. claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license # 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) # Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license # By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well. # extra_gated_fields: # I have read the License and agree with its terms: checkbox --- # SFCOCO Stable Diffusion Model Card SFCOCO Stable Diffusion is a Japanese-specific latent text-to-image diffusion model capable of generating photo-realistic images given any text input. This model was fine-tuned by using a powerful Japanese-specific latent text-to-image diffusion model, [Japanese Stable Diffusion](https://huggingface.co/rinna/japanese-stable-diffusion). We use the [Stable Diffusion text-to-image fine-tuning script](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) of [🤗 Diffusers](https://github.com/huggingface/diffusers) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nu-dialogue/clip-prefix-caption-jp/blob/master/notebooks/sfc2022_stable_diffusion.ipynb) ## Model Details - **Developed by:** Atsumoto Ohashi - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** Japanese - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model (LDM)](https://arxiv.org/abs/2112.10752) that used [Japanese Stable Diffusion](https://huggingface.co/rinna/japanese-stable-diffusion) as a pre-trained model. - **Resources for more information:** [Japanese Stable Diffusion GitHub Repository](https://github.com/rinnakk/japanese-stable-diffusion) ## Examples Firstly, install our package as follows. This package is modified [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Japanese Stable Diffusion. 
```bash pip install git+https://github.com/rinnakk/japanese-stable-diffusion ``` Run this command to log in with your HF Hub token if you haven't before: ```bash huggingface-cli login ``` Running the pipeline with the k_lms scheduler: ```python import torch from torch import autocast from diffusers import LMSDiscreteScheduler from japanese_stable_diffusion import JapaneseStableDiffusionPipeline model_id = "nu-dialogue/sfc2022-stable-diffusion" device = "cuda" # Use the K-LMS scheduler here instead scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000) pipe = JapaneseStableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True, torch_dtype=torch.float16) pipe = pipe.to(device) prompt = "福澤諭吉像の写真" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("output.png") ``` _Note: `JapaneseStableDiffusionPipeline` is almost same as diffusers' `StableDiffusionPipeline` but added some lines to initialize our models properly._ ## Training **Training Data** We used the SFCOCO2021 and SFCOCO2022 dataset for training the model. You can see these datasets in [this repository](https://github.com/nu-dialogue/clip-prefix-caption-jp). **Training Procedure** SFCOCO Stable Diffusion has the same architecture as Japanese Stable Diffusion and was trained by using Japanese Stable Diffusion. We use the [Stable Diffusion text-to-image fine-tuning script](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) of [🤗 Diffusers](https://github.com/huggingface/diffusers) ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` ```bibtex @misc{japanese_stable_diffusion, author = {Shing, Makoto and Sawada, Kei}, title = {Japanese Stable Diffusion}, howpublished = {\url{https://github.com/rinnakk/japanese-stable-diffusion}}, month = {September}, year = {2022}, } ``` *This model card was written by: Atsumoto Ohashi and is based on the [Japanese Stable Diffusion Model Card](https://github.com/rinnakk/japanese-stable-diffusion).*
vjkrish/lunarLander
vjkrish
2022-12-18T04:24:06Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T04:11:38Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -606.02 +/- 190.89 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Scrya/whisper-medium-id
Scrya
2022-12-18T04:09:37Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "id", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-16T16:41:14Z
--- language: - id license: apache-2.0 tags: - whisper-event - generated_from_trainer model-index: - name: Whisper Medium ID - FLEURS-CV results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: google/fleurs type: google/fleurs config: id_id split: test metrics: - type: wer value: 7.8 name: WER - type: cer value: 2.43 name: CER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: mozilla-foundation/common_voice_11_0 type: mozilla-foundation/common_voice_11_0 config: id split: test metrics: - type: wer value: 8.67 name: WER - type: cer value: 2.71 name: CER --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium ID - FLEURS-CV This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2563 - eval_wer: 8.4690 - eval_runtime: 2961.9108 - eval_samples_per_second: 1.453 - eval_steps_per_second: 0.091 - epoch: 14.29 - step: 5000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
wooihen/ppo-LunarLander-v2-TEST2
wooihen
2022-12-18T03:43:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-15T17:53:51Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.95 +/- 11.94 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
sumedh/biomedical_text_summarization
sumedh
2022-12-18T03:11:14Z
14
2
transformers
[ "transformers", "pytorch", "longt5", "text2text-generation", "summarization", "en", "dataset:sumedh/MeQSum", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-12-18T00:23:04Z
--- tags: - summarization language: - en widget: - text: Type your medical text here. 🤗 datasets: - sumedh/MeQSum co2_eq_emissions: emissions: 3198.3976606503647 model-index: - name: sumedh/biomedical_text_summarization results: - task: type: summarization name: Summarization metrics: - name: ROUGE-1 type: rouge value: 39.4086 verified: true - name: ROUGE-2 type: rouge value: 12.8115 verified: true - name: ROUGE-L type: rouge value: 21.9191 verified: true - name: ROUGE-LSUM type: rouge value: 35.2431 verified: true - name: loss type: loss value: 2.2001051902770996 verified: true - name: gen_len type: gen_len value: 133.8541 verified: true --- This model was created for text summarization for clinical text. Check the index for evaluation scores on the ROUGE metric.
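The biomedical summarization card above gives no usage snippet. Here is a minimal sketch (assumed, not part of the card) with the `transformers` summarization pipeline; the input text is an invented consumer-health question in the spirit of MeQSum.

```python
# Minimal usage sketch (assumed, not from the card): summarize a medical question
# with the transformers summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="sumedh/biomedical_text_summarization")
text = (
    "I have been taking lisinopril for two weeks and have developed a dry, "
    "persistent cough. Is this a known side effect, and should I stop the "
    "medication or ask my doctor to switch me to something else?"
)  # invented example input
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```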
mustfkeskin/ppo-LunarLander-v2
mustfkeskin
2022-12-18T02:46:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T02:45:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 247.56 +/- 22.06 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
nlp-cimat/politibeto
nlp-cimat
2022-12-18T02:45:19Z
10
4
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "masked-lm", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-06-20T18:09:39Z
--- language: - es tags: - masked-lm widget: - text: "La mayor ventaja de la democracia es su [MASK]." example_title: "Ejemplo 1" --- # PolitiBETO: A Spanish BERT adapted to a language domain of Political Tweets PolitiBETO is a [BERT model](https://github.com/google-research/bert) tailored for political tasks in social media corpora. It is a Domain Adaptation on top of [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased), a pretrained BERT in Spanish. This model is meant to be fine-tuned for downstream tasks. ## Citation [NLP-CIMAT at PoliticEs 2022: PolitiBETO, a Domain-Adapted Transformer for Multi-class Political Author Profiling](https://ceur-ws.org/Vol-3202/politices-paper2.pdf) To cite this in a publication please use the following: ``` @inproceedings{PolitiBeto2022, title={{NLP-CIMAT} at {P}olitic{E}s 2022: {P}oliti{BETO}, a {D}omain-{A}dapted {T}ransformer for {M}ulti-class {P}olitical {A}uthor {P}rofiling}, author={Emilio Villa-Cueva and Ivan Gonz{\'a}lez-Franco and Fernando Sanchez-Vega and Adri{\'a}n Pastor L{\'o}pez-Monroy}, booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2022)}, series = {{CEUR} Workshop Proceedings}, publisher = {CEUR-WS}, year={2022} } ```
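The PolitiBETO card above is meant for fine-tuning on downstream tasks, but its masked-LM head can be queried directly. Here is a minimal sketch (assumed, not part of the card) using the `fill-mask` pipeline with the widget example from the card.

```python
# Minimal usage sketch (assumed, not from the card): query PolitiBETO's masked-LM
# head with the fill-mask pipeline, reusing the card's widget example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlp-cimat/politibeto")
for pred in fill_mask("La mayor ventaja de la democracia es su [MASK]."):
    print(f"{pred['token_str']:>15}  {pred['score']:.3f}")
```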
AinTziLLo/ppo-LunarLander-v2
AinTziLLo
2022-12-18T02:27:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T01:11:11Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 285.39 +/- 21.24 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
greedypiggy/ppo-Huggy
greedypiggy
2022-12-18T01:59:16Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-18T01:59:08Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: greedypiggy/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
geninhu/whisper-medium-gl
geninhu
2022-12-18T01:20:31Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "gl", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-17T14:24:54Z
--- language: - gl license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Medium Galician results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 gl type: mozilla-foundation/common_voice_11_0 config: gl split: test args: gl metrics: - name: Wer type: wer value: 8.41678391128031 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium Galician This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 gl dataset. It achieves the following results on the evaluation set: - Loss: 0.2864 - Wer: 8.4168 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0074 | 6.01 | 1000 | 0.2564 | 8.8927 | | 0.0006 | 12.03 | 2000 | 0.2864 | 8.4168 | | 0.0003 | 19.01 | 3000 | 0.3043 | 8.5078 | | 0.0002 | 25.02 | 4000 | 0.3145 | 8.4913 | | 0.0002 | 32.01 | 5000 | 0.3189 | 8.4706 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
jlondonobo/whisper-large-v2-pt-v3
jlondonobo
2022-12-18T01:19:32Z
14
5
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-17T18:57:25Z
--- language: - pt license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Large Portuguese results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 pt type: mozilla-foundation/common_voice_11_0 config: pt split: test args: pt metrics: - name: Wer type: wer value: 4.8385198634858195 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large Portuguese This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 pt dataset. It achieves the following results on the evaluation set: - Loss: 0.1503 - Wer: 4.8385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - training_steps: 1500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1526 | 0.33 | 500 | 0.1588 | 4.9074 | | 0.1046 | 1.3 | 1000 | 0.1510 | 4.8806 | | 0.079 | 2.28 | 1500 | 0.1503 | 4.8385 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
toastedshibe/ppo-LunarLander-v2
toastedshibe
2022-12-18T00:30:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-18T00:20:14Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.60 +/- 15.87 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
pittawat/autotrain-twitter-covid-19-spam-detection-2512177276
pittawat
2022-12-18T00:20:04Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:pittawat/autotrain-data-twitter-covid-19-spam-detection", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-12-18T00:19:06Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - pittawat/autotrain-data-twitter-covid-19-spam-detection co2_eq_emissions: emissions: 1.0218403202204225 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 2512177276 - CO2 Emissions (in grams): 1.0218 ## Validation Metrics - Loss: 0.275 - Accuracy: 0.906 - Precision: 0.930 - Recall: 0.960 - AUC: 0.882 - F1: 0.945 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pittawat/autotrain-twitter-covid-19-spam-detection-2512177276 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("pittawat/autotrain-twitter-covid-19-spam-detection-2512177276", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("pittawat/autotrain-twitter-covid-19-spam-detection-2512177276", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
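The Python snippet in the card above stops at the raw model outputs. Here is a self-contained continuation sketch (not part of the card) showing one common way to turn the logits into a predicted label; pass `use_auth_token=True` as in the card's snippet if the repository requires authentication.

```python
# Sketch (assumed, not from the card): full round trip from text to a label,
# extending the card's example with a softmax and an id2label lookup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "pittawat/autotrain-twitter-covid-19-spam-detection-2512177276"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], round(float(probs[pred_id]), 3))
```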
PawanUP85/Arcpro
PawanUP85
2022-12-17T23:40:10Z
0
0
null
[ "license:bsd-3-clause-clear", "region:us" ]
null
2022-12-17T23:39:15Z
--- license: bsd-3-clause-clear --- `git lfs install` `git clone https://huggingface.co/PawanUP85/Arcpro`
Balthamos/chantum-test-q
Balthamos
2022-12-17T23:36:38Z
2
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-17T03:35:15Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: chantum1 --- ### Chantum Test q Dreambooth model trained by Balthamos with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v2-1-768 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompt chantum1 (use it in your prompt). ![chantum1 0](https://huggingface.co/Balthamos/chantum-test-q/resolve/main/concept_images/chantum1_%281%29.jpg)
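Besides the Colab notebook linked in the card above, the checkpoint can be loaded directly with `diffusers`. A minimal inference sketch (assumed, not part of the card); the prompt is an invented example using the concept token.

```python
# Minimal inference sketch (assumed, not from the card): load the DreamBooth
# checkpoint with diffusers and prompt it with the concept token "chantum1".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Balthamos/chantum-test-q", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait photo of chantum1, highly detailed").images[0]
image.save("chantum1.png")
```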
camenduru/xformers-hf-a10g
camenduru
2022-12-17T23:29:19Z
0
0
null
[ "region:us" ]
null
2022-12-05T11:22:57Z
--- title: xformers-hf-a10g emoji: 🚀 colorFrom: indigo colorTo: indigo pinned: false --- https://github.com/camenduru/stable-diffusion-webui-colab/releases
DrishtiSharma/whisper-small-hindi-3k-steps
DrishtiSharma
2022-12-17T22:59:19Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-17T20:58:54Z
--- language: - hi license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Hindi - Drishti Sharma results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: hi split: test args: hi metrics: - name: Wer type: wer value: 16.67658639318744 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hindi - Drishti Sharma This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3013 - Wer: 16.6766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0188 | 3.67 | 3000 | 0.3013 | 16.6766 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
spayot/hf-drl-unit1bonus-ppo-Huggy
spayot
2022-12-17T22:40:30Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-17T22:40:18Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: spayot/hf-drl-unit1bonus-ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Cesar514/ppo-Huggy
Cesar514
2022-12-17T22:16:06Z
17
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-17T22:15:46Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: Cesar514/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AsmaAsma/my-awesome-setfit-model
AsmaAsma
2022-12-17T21:31:02Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-12-15T18:09:31Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 28, "warmup_steps": 3, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Zionamsalem/ppo-Huggy
Zionamsalem
2022-12-17T20:54:10Z
24
1
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-17T20:54:03Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: Zionamsalem/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
linksaiyajin/q-Taxi-v3
linksaiyajin
2022-12-17T20:46:17Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T20:46:08Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.62 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="linksaiyajin/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
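The snippet in the q-Taxi-v3 card above calls `load_from_hub` and `evaluate_agent` without defining them. Below is a sketch (an assumption, not part of the card) of what such a `load_from_hub` helper typically looks like: download the pickled Q-table dictionary from the Hub and unpickle it.

```python
# Sketch of the load_from_hub helper assumed by the card's snippet (it is not
# defined there): fetch the pickled Q-table dictionary from the Hub and load it.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```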
andge/ppo-Huggy
andge
2022-12-17T20:38:21Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-17T20:38:07Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: andge/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Musha-the-Yusha/PPO-LunarLander-V2
Musha-the-Yusha
2022-12-17T20:35:12Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-16T17:10:33Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 271.35 +/- 19.79 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AgentXXX/ppo-LunarLander-v2-TEST
AgentXXX
2022-12-17T20:18:00Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T20:15:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 289.58 +/- 23.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
sd-concepts-library/painting-made-by-bruegel-v4
sd-concepts-library
2022-12-17T20:02:41Z
0
4
null
[ "license:mit", "region:us" ]
null
2022-12-17T18:01:28Z
--- license: mit --- ### painting made by bruegel V4 on Stable Diffusion This version includes entire paintings, as well as close ups. This is the `<bruegel-style-artwork>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Using stabilityai/stable-diffusion-2-base Example output: ![<bruegel> 500](https://i.imgur.com/C8jcA0v.jpg) Here is the new concept you will be able to use as a `style`: ![<bruegel-style-artwork> 0](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/58.jpeg) ![<bruegel-style-artwork> 1](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/91.jpeg) ![<bruegel-style-artwork> 2](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/87.jpeg) ![<bruegel-style-artwork> 3](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/121.jpeg) ![<bruegel-style-artwork> 4](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/146.jpeg) ![<bruegel-style-artwork> 5](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/112.jpeg) ![<bruegel-style-artwork> 6](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/186.jpeg) ![<bruegel-style-artwork> 7](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/139.jpeg) ![<bruegel-style-artwork> 8](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/120.jpeg) ![<bruegel-style-artwork> 9](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/44.jpeg) ![<bruegel-style-artwork> 10](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/69.jpeg) ![<bruegel-style-artwork> 11](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/148.jpeg) ![<bruegel-style-artwork> 12](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/98.jpeg) ![<bruegel-style-artwork> 13](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/244.jpeg) ![<bruegel-style-artwork> 14](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/107.jpeg) ![<bruegel-style-artwork> 15](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/197.jpeg) ![<bruegel-style-artwork> 16](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/132.jpeg) ![<bruegel-style-artwork> 17](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/71.jpeg) ![<bruegel-style-artwork> 18](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/8.jpeg) ![<bruegel-style-artwork> 
19](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/125.jpeg) ![<bruegel-style-artwork> 20](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/154.jpeg) ![<bruegel-style-artwork> 21](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/65.jpeg) ![<bruegel-style-artwork> 22](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/74.jpeg) ![<bruegel-style-artwork> 23](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/209.jpeg) ![<bruegel-style-artwork> 24](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/226.jpeg) ![<bruegel-style-artwork> 25](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/129.jpeg) ![<bruegel-style-artwork> 26](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/249.jpeg) ![<bruegel-style-artwork> 27](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/82.jpeg) ![<bruegel-style-artwork> 28](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/103.jpeg) ![<bruegel-style-artwork> 29](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/48.jpeg) ![<bruegel-style-artwork> 30](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/183.jpeg) ![<bruegel-style-artwork> 31](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/62.jpeg) ![<bruegel-style-artwork> 32](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/99.jpeg) ![<bruegel-style-artwork> 33](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/224.jpeg) ![<bruegel-style-artwork> 34](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/145.jpeg) ![<bruegel-style-artwork> 35](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/12.jpeg) ![<bruegel-style-artwork> 36](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/116.jpeg) ![<bruegel-style-artwork> 37](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/27.jpeg) ![<bruegel-style-artwork> 38](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/153.jpeg) ![<bruegel-style-artwork> 39](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/26.jpeg) ![<bruegel-style-artwork> 40](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/152.jpeg) ![<bruegel-style-artwork> 41](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/63.jpeg) ![<bruegel-style-artwork> 42](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/47.jpeg) ![<bruegel-style-artwork> 43](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/40.jpeg) ![<bruegel-style-artwork> 
44](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/123.jpeg) ![<bruegel-style-artwork> 45](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/96.jpeg) ![<bruegel-style-artwork> 46](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/237.jpeg) ![<bruegel-style-artwork> 47](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/54.jpeg) ![<bruegel-style-artwork> 48](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/105.jpeg) ![<bruegel-style-artwork> 49](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/134.jpeg) ![<bruegel-style-artwork> 50](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/89.jpeg) ![<bruegel-style-artwork> 51](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/174.jpeg) ![<bruegel-style-artwork> 52](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/4.jpeg) ![<bruegel-style-artwork> 53](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/228.jpeg) ![<bruegel-style-artwork> 54](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/1.jpeg) ![<bruegel-style-artwork> 55](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/230.jpeg) ![<bruegel-style-artwork> 56](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/75.jpeg) ![<bruegel-style-artwork> 57](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/73.jpeg) ![<bruegel-style-artwork> 58](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/221.jpeg) ![<bruegel-style-artwork> 59](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/101.jpeg) ![<bruegel-style-artwork> 60](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/140.jpeg) ![<bruegel-style-artwork> 61](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/212.jpeg) ![<bruegel-style-artwork> 62](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/220.jpeg) ![<bruegel-style-artwork> 63](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/43.jpeg) ![<bruegel-style-artwork> 64](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/110.jpeg) ![<bruegel-style-artwork> 65](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/199.jpeg) ![<bruegel-style-artwork> 66](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/19.jpeg) ![<bruegel-style-artwork> 67](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/104.jpeg) ![<bruegel-style-artwork> 68](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/187.jpeg) ![<bruegel-style-artwork> 
69](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/168.jpeg) ![<bruegel-style-artwork> 70](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/164.jpeg) ![<bruegel-style-artwork> 71](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/185.jpeg) ![<bruegel-style-artwork> 72](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/159.jpeg) ![<bruegel-style-artwork> 73](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/155.jpeg) ![<bruegel-style-artwork> 74](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/2.jpeg) ![<bruegel-style-artwork> 75](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/0.jpeg) ![<bruegel-style-artwork> 76](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/217.jpeg) ![<bruegel-style-artwork> 77](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/128.jpeg) ![<bruegel-style-artwork> 78](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/205.jpeg) ![<bruegel-style-artwork> 79](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/248.jpeg) ![<bruegel-style-artwork> 80](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/250.jpeg) ![<bruegel-style-artwork> 81](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/117.jpeg) ![<bruegel-style-artwork> 82](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/36.jpeg) ![<bruegel-style-artwork> 83](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/227.jpeg) ![<bruegel-style-artwork> 84](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/137.jpeg) ![<bruegel-style-artwork> 85](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/72.jpeg) ![<bruegel-style-artwork> 86](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/18.jpeg) ![<bruegel-style-artwork> 87](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/222.jpeg) ![<bruegel-style-artwork> 88](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/86.jpeg) ![<bruegel-style-artwork> 89](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/61.jpeg) ![<bruegel-style-artwork> 90](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/126.jpeg) ![<bruegel-style-artwork> 91](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/171.jpeg) ![<bruegel-style-artwork> 92](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/232.jpeg) ![<bruegel-style-artwork> 93](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/124.jpeg) ![<bruegel-style-artwork> 
94](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/191.jpeg) ![<bruegel-style-artwork> 95](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/102.jpeg) ![<bruegel-style-artwork> 96](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/24.jpeg) ![<bruegel-style-artwork> 97](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/113.jpeg) ![<bruegel-style-artwork> 98](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/192.jpeg) ![<bruegel-style-artwork> 99](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/131.jpeg) ![<bruegel-style-artwork> 100](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/182.jpeg) ![<bruegel-style-artwork> 101](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/198.jpeg) ![<bruegel-style-artwork> 102](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/207.jpeg) ![<bruegel-style-artwork> 103](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/59.jpeg) ![<bruegel-style-artwork> 104](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/204.jpeg) ![<bruegel-style-artwork> 105](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/97.jpeg) ![<bruegel-style-artwork> 106](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/194.jpeg) ![<bruegel-style-artwork> 107](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/211.jpeg) ![<bruegel-style-artwork> 108](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/247.jpeg) ![<bruegel-style-artwork> 109](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/229.jpeg) ![<bruegel-style-artwork> 110](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/6.jpeg) ![<bruegel-style-artwork> 111](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/45.jpeg) ![<bruegel-style-artwork> 112](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/25.jpeg) ![<bruegel-style-artwork> 113](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/147.jpeg) ![<bruegel-style-artwork> 114](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/11.jpeg) ![<bruegel-style-artwork> 115](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/119.jpeg) ![<bruegel-style-artwork> 116](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/34.jpeg) ![<bruegel-style-artwork> 117](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/33.jpeg) ![<bruegel-style-artwork> 118](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/9.jpeg) ![<bruegel-style-artwork> 
119](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/202.jpeg) ![<bruegel-style-artwork> 120](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/83.jpeg) ![<bruegel-style-artwork> 121](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/165.jpeg) ![<bruegel-style-artwork> 122](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/7.jpeg) ![<bruegel-style-artwork> 123](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/68.jpeg) ![<bruegel-style-artwork> 124](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/46.jpeg) ![<bruegel-style-artwork> 125](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/114.jpeg) ![<bruegel-style-artwork> 126](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/14.jpeg) ![<bruegel-style-artwork> 127](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/127.jpeg) ![<bruegel-style-artwork> 128](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/136.jpeg) ![<bruegel-style-artwork> 129](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/149.jpeg) ![<bruegel-style-artwork> 130](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/176.jpeg) ![<bruegel-style-artwork> 131](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/184.jpeg) ![<bruegel-style-artwork> 132](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/77.jpeg) ![<bruegel-style-artwork> 133](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/201.jpeg) ![<bruegel-style-artwork> 134](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/218.jpeg) ![<bruegel-style-artwork> 135](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/231.jpeg) ![<bruegel-style-artwork> 136](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/142.jpeg) ![<bruegel-style-artwork> 137](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/188.jpeg) ![<bruegel-style-artwork> 138](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/80.jpeg) ![<bruegel-style-artwork> 139](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/108.jpeg) ![<bruegel-style-artwork> 140](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/150.jpeg) ![<bruegel-style-artwork> 141](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/175.jpeg) ![<bruegel-style-artwork> 142](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/162.jpeg) ![<bruegel-style-artwork> 143](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/234.jpeg) ![<bruegel-style-artwork> 
144](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/118.jpeg) ![<bruegel-style-artwork> 145](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/163.jpeg) ![<bruegel-style-artwork> 146](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/79.jpeg) ![<bruegel-style-artwork> 147](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/70.jpeg) ![<bruegel-style-artwork> 148](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/20.jpeg) ![<bruegel-style-artwork> 149](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/22.jpeg) ![<bruegel-style-artwork> 150](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/10.jpeg) ![<bruegel-style-artwork> 151](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/84.jpeg) ![<bruegel-style-artwork> 152](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/42.jpeg) ![<bruegel-style-artwork> 153](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/66.jpeg) ![<bruegel-style-artwork> 154](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/240.jpeg) ![<bruegel-style-artwork> 155](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/180.jpeg) ![<bruegel-style-artwork> 156](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/233.jpeg) ![<bruegel-style-artwork> 157](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/93.jpeg) ![<bruegel-style-artwork> 158](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/167.jpeg) ![<bruegel-style-artwork> 159](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/95.jpeg) ![<bruegel-style-artwork> 160](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/92.jpeg) ![<bruegel-style-artwork> 161](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/242.jpeg) ![<bruegel-style-artwork> 162](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/239.jpeg) ![<bruegel-style-artwork> 163](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/213.jpeg) ![<bruegel-style-artwork> 164](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/52.jpeg) ![<bruegel-style-artwork> 165](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/210.jpeg) ![<bruegel-style-artwork> 166](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/78.jpeg) ![<bruegel-style-artwork> 167](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/193.jpeg) ![<bruegel-style-artwork> 168](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/166.jpeg) ![<bruegel-style-artwork> 
169](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/236.jpeg) ![<bruegel-style-artwork> 170](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/38.jpeg) ![<bruegel-style-artwork> 171](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/35.jpeg) ![<bruegel-style-artwork> 172](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/173.jpeg) ![<bruegel-style-artwork> 173](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/195.jpeg) ![<bruegel-style-artwork> 174](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/156.jpeg) ![<bruegel-style-artwork> 175](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/115.jpeg) ![<bruegel-style-artwork> 176](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/178.jpeg) ![<bruegel-style-artwork> 177](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/49.jpeg) ![<bruegel-style-artwork> 178](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/158.jpeg) ![<bruegel-style-artwork> 179](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/215.jpeg) ![<bruegel-style-artwork> 180](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/28.jpeg) ![<bruegel-style-artwork> 181](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/235.jpeg) ![<bruegel-style-artwork> 182](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/190.jpeg) ![<bruegel-style-artwork> 183](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/177.jpeg) ![<bruegel-style-artwork> 184](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/32.jpeg) ![<bruegel-style-artwork> 185](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/94.jpeg) ![<bruegel-style-artwork> 186](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/223.jpeg) ![<bruegel-style-artwork> 187](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/122.jpeg) ![<bruegel-style-artwork> 188](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/203.jpeg) ![<bruegel-style-artwork> 189](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/172.jpeg) ![<bruegel-style-artwork> 190](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/196.jpeg) ![<bruegel-style-artwork> 191](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/111.jpeg) ![<bruegel-style-artwork> 192](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/64.jpeg) ![<bruegel-style-artwork> 193](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/241.jpeg) ![<bruegel-style-artwork> 
194](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/135.jpeg) ![<bruegel-style-artwork> 195](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/3.jpeg) ![<bruegel-style-artwork> 196](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/88.jpeg) ![<bruegel-style-artwork> 197](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/246.jpeg) ![<bruegel-style-artwork> 198](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/13.jpeg) ![<bruegel-style-artwork> 199](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/138.jpeg) ![<bruegel-style-artwork> 200](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/161.jpeg) ![<bruegel-style-artwork> 201](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/15.jpeg) ![<bruegel-style-artwork> 202](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/5.jpeg) ![<bruegel-style-artwork> 203](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/85.jpeg) ![<bruegel-style-artwork> 204](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/81.jpeg) ![<bruegel-style-artwork> 205](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/16.jpeg) ![<bruegel-style-artwork> 206](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/133.jpeg) ![<bruegel-style-artwork> 207](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/206.jpeg) ![<bruegel-style-artwork> 208](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/181.jpeg) ![<bruegel-style-artwork> 209](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/169.jpeg) ![<bruegel-style-artwork> 210](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/189.jpeg) ![<bruegel-style-artwork> 211](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/100.jpeg) ![<bruegel-style-artwork> 212](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/109.jpeg) ![<bruegel-style-artwork> 213](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/200.jpeg) ![<bruegel-style-artwork> 214](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/55.jpeg) ![<bruegel-style-artwork> 215](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/37.jpeg) ![<bruegel-style-artwork> 216](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/130.jpeg) ![<bruegel-style-artwork> 217](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/60.jpeg) ![<bruegel-style-artwork> 218](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/21.jpeg) ![<bruegel-style-artwork> 
219](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/219.jpeg) ![<bruegel-style-artwork> 220](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/53.jpeg) ![<bruegel-style-artwork> 221](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/170.jpeg) ![<bruegel-style-artwork> 222](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/31.jpeg) ![<bruegel-style-artwork> 223](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/141.jpeg) ![<bruegel-style-artwork> 224](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/17.jpeg) ![<bruegel-style-artwork> 225](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/23.jpeg) ![<bruegel-style-artwork> 226](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/51.jpeg) ![<bruegel-style-artwork> 227](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/30.jpeg) ![<bruegel-style-artwork> 228](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/243.jpeg) ![<bruegel-style-artwork> 229](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/50.jpeg) ![<bruegel-style-artwork> 230](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/76.jpeg) ![<bruegel-style-artwork> 231](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/179.jpeg) ![<bruegel-style-artwork> 232](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/208.jpeg) ![<bruegel-style-artwork> 233](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/39.jpeg) ![<bruegel-style-artwork> 234](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/29.jpeg) ![<bruegel-style-artwork> 235](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/225.jpeg) ![<bruegel-style-artwork> 236](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/144.jpeg) ![<bruegel-style-artwork> 237](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/238.jpeg) ![<bruegel-style-artwork> 238](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/106.jpeg) ![<bruegel-style-artwork> 239](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/151.jpeg) ![<bruegel-style-artwork> 240](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/41.jpeg) ![<bruegel-style-artwork> 241](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/157.jpeg) ![<bruegel-style-artwork> 242](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/216.jpeg) ![<bruegel-style-artwork> 243](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/56.jpeg) ![<bruegel-style-artwork> 
244](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/245.jpeg) ![<bruegel-style-artwork> 245](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/57.jpeg) ![<bruegel-style-artwork> 246](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/160.jpeg) ![<bruegel-style-artwork> 247](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/214.jpeg) ![<bruegel-style-artwork> 248](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/143.jpeg) ![<bruegel-style-artwork> 249](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/67.jpeg) ![<bruegel-style-artwork> 250](https://huggingface.co/sd-concepts-library/painting-made-by-bruegel-v4/resolve/main/concept_images/90.jpeg)
cxyz/mrnlthkrrgl
cxyz
2022-12-17T19:45:55Z
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-17T19:32:31Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### mrnlthkrrgl Dreambooth model trained by cxyz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
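A minimal `diffusers` inference sketch (not part of the original card): the prompt token `mrnlthkrrgl` is an assumption, based on the fast-DreamBooth convention of naming the model after its instance token.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "cxyz/mrnlthkrrgl", torch_dtype=torch.float16
).to("cuda")

# The concept is invoked via its instance token ("mrnlthkrrgl" is assumed)
prompt = "a portrait photo of mrnlthkrrgl, studio lighting"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("mrnlthkrrgl.png")
```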
dkuznetsov/ppo-LunarLander-v2
dkuznetsov
2022-12-17T19:40:09Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T15:34:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 286.41 +/- 19.02 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
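The usage section above is left as a TODO; a hedged sketch of the typical pattern follows. The checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on the usual `package_to_hub` naming convention.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="dkuznetsov/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```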
howtodowtle/adiaz-1-not-good
howtodowtle
2022-12-17T19:12:46Z
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-17T19:09:25Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### adiaz_1_not_good Dreambooth model trained by howtodowtle with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/howtodowtle/adiaz-1-not-good/resolve/main/sample_images/00000-75978624-Linkedin_profil.png)
Sambosis/distilbert-base-uncased-finetuned-squad
Sambosis
2022-12-17T19:01:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-12-11T18:37:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 2.1904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6224 | 1.0 | 692 | 1.1812 | | 1.0216 | 2.0 | 1384 | 1.2495 | | 0.5638 | 3.0 | 2076 | 1.3098 | | 0.3679 | 4.0 | 2768 | 1.6784 | | 0.2703 | 5.0 | 3460 | 1.8842 | | 0.1057 | 6.0 | 4152 | 2.1904 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
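A minimal inference sketch with the `transformers` question-answering pipeline (not part of the original card; the question and context are illustrative):

```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
qa = pipeline(
    "question-answering",
    model="Sambosis/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```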
qanastek/whisper-tiny-french-cased
qanastek
2022-12-17T18:58:39Z
25
3
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "fr", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-17T12:14:38Z
--- language: - fr license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Tiny French Cased results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 fr type: mozilla-foundation/common_voice_11_0 config: fr split: test args: fr metrics: - name: Wer type: wer value: 33.06549172161867 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: google/fleurs fr_fr type: google/fleurs config: fr_fr split: test args: fr_fr metrics: - name: Wer type: wer value: 36.69 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: facebook/voxpopuli fr type: facebook/voxpopuli config: fr split: test args: fr metrics: - name: Wer type: wer value: 32.71 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny French Cased This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 fr dataset. It achieves the following results on the evaluation set: - Loss: 0.6509 - Wer on `mozilla-foundation/common_voice_11_0` `fr`: 33.0655 - Wer on `google/fleurs` `fr_fr`: 36.69 - Wer on `facebook/voxpopuli` `fr`: 32.71 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.7185 | 0.2 | 1000 | 0.7608 | 38.1636 | | 0.6052 | 1.2 | 2000 | 0.6949 | 34.9513 | | 0.4467 | 2.2 | 3000 | 0.6708 | 34.3393 | | 0.4773 | 3.2 | 4000 | 0.6536 | 33.2102 | | 0.4479 | 4.2 | 5000 | 0.6509 | 33.0655 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
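A minimal transcription sketch with the `transformers` ASR pipeline (not part of the original card; `audio.wav` is a placeholder path for a French speech recording):

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for French transcription
asr = pipeline(
    "automatic-speech-recognition",
    model="qanastek/whisper-tiny-french-cased",
    chunk_length_s=30,  # long-form audio is split into 30 s chunks
)

# "audio.wav" is a placeholder for a local French recording
print(asr("audio.wav")["text"])
```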
sinsforeal/furuderika
sinsforeal
2022-12-17T18:49:17Z
0
3
null
[ "license:openrail", "region:us" ]
null
2022-12-17T18:05:19Z
--- license: openrail --- Furude Rika from Higurashi No Naku Koro Ni. This is a dreambooth that was trained on 71 images at 768 resolution using the Webui dreambooth extension. I used batch size 4 and 2 gradient accumulation steps, which are the optimal dreambooth settings for my system. You can summon Rika with "furude rika". You can add "school uniform", "shrine maiden outfit", "pe uniform" or "sundress" after "furude rika" to see some of her different outfits. "school uniform" ![grid-0454.png](https://s3.amazonaws.com/moonup/production/uploads/1671300945194-63602a9f3605bd411c18b4e0.png) "shrine maiden outfit" ![grid-0455.png](https://s3.amazonaws.com/moonup/production/uploads/1671300980179-63602a9f3605bd411c18b4e0.png) "sundress" ![grid-0456.png](https://s3.amazonaws.com/moonup/production/uploads/1671300999297-63602a9f3605bd411c18b4e0.png) "pe uniform" ![grid-0457.png](https://s3.amazonaws.com/moonup/production/uploads/1671301016039-63602a9f3605bd411c18b4e0.png)
polejowska/vit-vit-base-patch16-224-in21k-eurosat
polejowska
2022-12-17T18:36:59Z
21
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-12-17T17:32:47Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-vit-base-patch16-224-in21k-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.988641975308642 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-vit-base-patch16-224-in21k-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0957 - Accuracy: 0.9886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3303 | 0.99 | 147 | 0.2950 | 0.9790 | | 0.1632 | 1.99 | 294 | 0.1593 | 0.9842 | | 0.1097 | 2.99 | 441 | 0.1223 | 0.9859 | | 0.0868 | 3.99 | 588 | 0.1053 | 0.9877 | | 0.0651 | 4.99 | 735 | 0.0957 | 0.9886 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
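A minimal inference sketch with the image-classification pipeline (not part of the original card; the image path is a placeholder for an EuroSAT-style RGB tile):

```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub
classifier = pipeline(
    "image-classification",
    model="polejowska/vit-vit-base-patch16-224-in21k-eurosat",
)

# "satellite_tile.jpg" is a placeholder path for an input image
for prediction in classifier("satellite_tile.jpg", top_k=3):
    print(prediction["label"], round(prediction["score"], 4))
```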
dor88/q-FrozenLake-v1-4x4-noSlippery
dor88
2022-12-17T18:35:46Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T18:35:42Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="dor88/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"], is_slippery=False) ```
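The snippet above relies on a `load_from_hub` helper defined in the course notebook; a hedged sketch of doing the same with `huggingface_hub` and rolling out the greedy policy follows. The pickle keys `qtable` and `env_id` are assumptions based on the course convention.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model dict (keys "qtable" and "env_id" are assumed)
path = hf_hub_download(
    repo_id="dor88/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
with open(path, "rb") as f:
    model = pickle.load(f)

qtable = model["qtable"]
env = gym.make(model["env_id"], is_slippery=False)

# Roll out one episode with the greedy policy
# (written against the classic gym API; gymnasium returns extra values from reset/step)
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```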
jackson-lucas/q-Taxi-v3-v2
jackson-lucas
2022-12-17T18:18:13Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T18:11:57Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="jackson-lucas/q-Taxi-v3-v2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Aileenvl/ppo-LunarLander-v2
Aileenvl
2022-12-17T18:17:15Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T18:16:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 244.26 +/- 14.26 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
sgangireddy/whisper-medium-cv-fleurs-tr-3k
sgangireddy
2022-12-17T18:02:55Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-15T10:41:59Z
--- license: apache-2.0 tags: - whisper-event - generated_from_trainer metrics: - wer model-index: - name: openai/whisper-medium results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2406 - Wer: 10.0333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0241 | 1.06 | 1000 | 0.1996 | 10.4543 | | 0.009 | 2.12 | 2000 | 0.2156 | 10.1152 | | 0.0045 | 3.19 | 3000 | 0.2406 | 10.0333 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
jackson-lucas/q-Taxi-v3
jackson-lucas
2022-12-17T17:55:48Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T17:38:20Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="jackson-lucas/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
J4F4N4F/Huggy
J4F4N4F
2022-12-17T17:54:57Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-17T17:54:44Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: J4F4N4F/Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
sheldon-spock/q-Taxi-v3
sheldon-spock
2022-12-17T17:53:44Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T17:53:38Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="sheldon-spock/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
sheldon-spock/q-FrozenLake-v1-4x4-noSlippery
sheldon-spock
2022-12-17T17:51:03Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T17:50:05Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="sheldon-spock/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
bnriiitb/whisper-small-te-4k
bnriiitb
2022-12-17T17:49:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "te", "dataset:IndicSUPERB_train_validation_splits", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-16T14:47:05Z
--- language: - te license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - IndicSUPERB_train_validation_splits metrics: - wer model-index: - name: Whisper Small Telugu - Naga Budigam results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: IndicSUPERB train and validation splits type: IndicSUPERB train and validation splits config: None split: None args: 'config: te, split: test' metrics: - name: Wer type: wer value: 38.14924740301039 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Telugu - Naga Budigam This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chai_Bisket_Stories_16-08-2021_14-17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2875 - Wer: 38.1492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 15000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.2064 | 0.66 | 500 | 0.2053 | 60.1707 | | 0.1399 | 1.33 | 1000 | 0.1535 | 49.3269 | | 0.1093 | 1.99 | 1500 | 0.1365 | 44.5516 | | 0.0771 | 2.66 | 2000 | 0.1316 | 42.1136 | | 0.0508 | 3.32 | 2500 | 0.1395 | 41.1384 | | 0.0498 | 3.99 | 3000 | 0.1386 | 40.5395 | | 0.0302 | 4.65 | 3500 | 0.1529 | 40.9529 | | 0.0157 | 5.32 | 4000 | 0.1719 | 40.6667 | | 0.0183 | 5.98 | 4500 | 0.1723 | 40.3646 | | 0.0083 | 6.65 | 5000 | 0.1911 | 40.4335 | | 0.0061 | 7.31 | 5500 | 0.2109 | 40.4176 | | 0.0055 | 7.98 | 6000 | 0.2075 | 39.7021 | | 0.0039 | 8.64 | 6500 | 0.2186 | 40.2639 | | 0.0026 | 9.31 | 7000 | 0.2254 | 39.1032 | | 0.0035 | 9.97 | 7500 | 0.2289 | 39.2834 | | 0.0016 | 10.64 | 8000 | 0.2332 | 39.1456 | | 0.0016 | 11.3 | 8500 | 0.2395 | 39.4371 | | 0.0016 | 11.97 | 9000 | 0.2447 | 39.2410 | | 0.0009 | 12.63 | 9500 | 0.2548 | 38.7799 | | 0.0008 | 13.3 | 10000 | 0.2551 | 38.7481 | | 0.0008 | 13.96 | 10500 | 0.2621 | 38.8276 | | 0.0007 | 14.63 | 11000 | 0.2633 | 38.6686 | | 0.0003 | 15.29 | 11500 | 0.2711 | 38.4566 | | 0.0005 | 15.96 | 12000 | 0.2772 | 38.7852 | | 0.0001 | 16.62 | 12500 | 0.2771 | 38.2658 | | 0.0001 | 17.29 | 13000 | 0.2808 | 38.2393 | | 0.0001 | 17.95 | 13500 | 0.2815 | 38.1810 | | 0.0 | 18.62 | 14000 | 0.2854 | 38.2022 | | 0.0 | 19.28 | 14500 | 0.2872 | 38.1333 | | 0.0 | 19.95 | 15000 | 0.2875 | 38.1492 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0 - Datasets 2.7.1 - Tokenizers 0.13.2
thiagoabreulima/dqn-SpaceInvadersNoFrameskip-v4
thiagoabreulima
2022-12-17T17:43:35Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T16:16:12Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 6.50 +/- 16.29 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga thiagoabreulima -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga thiagoabreulima -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga thiagoabreulima ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.001), ('learning_starts', 100000), ('n_timesteps', 300000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
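Besides the RL Zoo commands above, the checkpoint can also be loaded directly with stable-baselines3; a hedged sketch follows, assuming the zip inside the repo follows the usual RL Zoo naming (`dqn-SpaceInvadersNoFrameskip-v4.zip`).

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Filename is an assumption based on the RL Zoo upload convention
checkpoint = load_from_hub(
    repo_id="thiagoabreulima/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrappers + 4-frame stack
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```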
Schoolar/ppo-LunarLander-v2
Schoolar
2022-12-17T17:38:39Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T17:38:00Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -225.45 +/- 28.06 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Murdokai/novasessaodidicowe
Murdokai
2022-12-17T17:33:10Z
3
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-17T17:27:29Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### novasessaodidicowe Dreambooth model trained by Murdokai with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
polejowska/vit-convnext-tiny-224-eurosat
polejowska
2022-12-17T17:23:59Z
21
0
transformers
[ "transformers", "pytorch", "convnext", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-12-17T16:34:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-convnext-tiny-224-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9859259259259259 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-convnext-tiny-224-eurosat This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0576 - Accuracy: 0.9859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2881 | 0.99 | 147 | 0.2325 | 0.9588 | | 0.0869 | 1.99 | 294 | 0.0912 | 0.9753 | | 0.0687 | 2.99 | 441 | 0.0663 | 0.9805 | | 0.0272 | 3.99 | 588 | 0.0576 | 0.9859 | | 0.0247 | 4.99 | 735 | 0.0532 | 0.9854 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
RajMoodley/q-Taxi-v3
RajMoodley
2022-12-17T17:22:13Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T17:22:08Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="RajMoodley/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
kuntalcse006/finetuning-sentiment-model-3000-samples
kuntalcse006
2022-12-17T17:17:00Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-17T16:57:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3093 - Accuracy: 0.8733 - F1: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
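A minimal inference sketch with the text-classification pipeline (not part of the original card; the example sentence is illustrative and the label names follow whatever mapping is stored in the model config):

```python
from transformers import pipeline

# Load the IMDB sentiment classifier from the Hub
sentiment = pipeline(
    "text-classification",
    model="kuntalcse006/finetuning-sentiment-model-3000-samples",
)

print(sentiment("This movie was a wonderful surprise from start to finish."))
```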
nepp1d0/prot_bert_classification_finetuned_training_script_trial
nepp1d0
2022-12-17T16:52:10Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-17T16:50:22Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: prot_bert_classification_finetuned_training_script_trial results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prot_bert_classification_finetuned_training_script_trial This model is a fine-tuned version of [nepp1d0/prot_bert-finetuned-smiles-bindingDB](https://huggingface.co/nepp1d0/prot_bert-finetuned-smiles-bindingDB) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6847 - Accuracy: 0.86 - F1: 0.9247 - Precision: 1.0 - Recall: 0.86 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 3 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.6929 | 1.0 | 25 | 0.6847 | 0.86 | 0.9247 | 1.0 | 0.86 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
Alfred5347/aaa
Alfred5347
2022-12-17T16:47:13Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2022-12-17T16:47:12Z
--- license: bigscience-openrail-m ---
glenn2/q-FrozenLake-v1-4x4-noSlippery
glenn2
2022-12-17T16:25:51Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T16:25:45Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="glenn2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
polejowska/swin-tiny-patch4-window7-224-eurosat
polejowska
2022-12-17T16:13:17Z
59
0
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-12-11T12:51:50Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9851851851851852 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0447 - Accuracy: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1547 | 0.99 | 147 | 0.0956 | 0.9711 | | 0.0707 | 1.99 | 294 | 0.0759 | 0.9733 | | 0.0537 | 2.99 | 441 | 0.0680 | 0.9768 | | 0.0302 | 3.99 | 588 | 0.0447 | 0.9852 | | 0.0225 | 4.99 | 735 | 0.0489 | 0.9837 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
kontogiorgos/testpyramidsrnd
kontogiorgos
2022-12-17T16:12:41Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-12-17T16:12:34Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: kontogiorgos/testpyramidsrnd 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
troesy/toxicBERT-params-tryout
troesy
2022-12-17T16:01:36Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-17T15:46:48Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: toxicBERT-params-tryout results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # toxicBERT-params-tryout This model is a fine-tuned version of [unitary/toxic-bert](https://huggingface.co/unitary/toxic-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1804 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.9314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 22 | 0.4522 | 0.0 | 0.0 | 0.0 | 0.8795 | | No log | 2.0 | 44 | 0.2784 | 0.0 | 0.0 | 0.0 | 0.8996 | | No log | 3.0 | 66 | 0.2150 | 0.0 | 0.0 | 0.0 | 0.9219 | | No log | 4.0 | 88 | 0.1888 | 0.0 | 0.0 | 0.0 | 0.9297 | | No log | 5.0 | 110 | 0.1829 | 0.0 | 0.0 | 0.0 | 0.9303 | | No log | 6.0 | 132 | 0.1810 | 0.0 | 0.0 | 0.0 | 0.9305 | | No log | 7.0 | 154 | 0.1804 | 0.0 | 0.0 | 0.0 | 0.9314 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
zigg-ai/d5cf4e49-bdba-434c-b8ed-1efdb5941486
zigg-ai
2022-12-17T15:44:38Z
1
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-17T15:26:37Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: sdcid --- ### training params ```json { "pretrained_model_name_or_path": "multimodalart/sd-fine-tunable", "instance_data_dir": "./d5cf4e49-bdba-434c-b8ed-1efdb5941486/instance_data", "class_data_dir": "./class_data/class", "output_dir": "./d5cf4e49-bdba-434c-b8ed-1efdb5941486/", "train_text_encoder": true, "with_prior_preservation": false, "prior_loss_weight": 1.0, "instance_prompt": "sdcid", "class_prompt": "", "resolution": 512, "train_batch_size": 1, "gradient_accumulation_steps": 1, "gradient_checkpointing": true, "use_8bit_adam": true, "learning_rate": 4e-06, "lr_scheduler": "polynomial", "lr_warmup_steps": 0, "num_class_images": 200, "max_train_steps": 1050, "mixed_precision": "fp16" } ```
Rubiksman23/ppo-LunarLander-v2
Rubiksman23
2022-12-17T15:43:14Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T15:42:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.08 +/- 13.11 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
LuniLand/q-Taxi-v3
LuniLand
2022-12-17T15:42:46Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T15:42:15Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="LuniLand/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
SCD28/camembert-ner
SCD28
2022-12-17T15:40:39Z
10
0
transformers
[ "transformers", "pytorch", "camembert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-17T12:59:58Z
--- license: mit tags: - generated_from_trainer model-index: - name: camembert-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-ner This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1179 - Overall Precision: 0.7367 - Overall Recall: 0.7522 - Overall F1: 0.7444 - Overall Accuracy: 0.9728 - Humanprod F1: 0.1639 - Loc F1: 0.7657 - Org F1: 0.5352 - Per F1: 0.7961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Humanprod F1 | Loc F1 | Org F1 | Per F1 | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------------:|:------:|:------:|:------:| | No log | 1.0 | 307 | 0.1254 | 0.7185 | 0.7420 | 0.7300 | 0.9715 | 0.0357 | 0.7579 | 0.5052 | 0.7778 | | 0.1195 | 2.0 | 614 | 0.1179 | 0.7367 | 0.7522 | 0.7444 | 0.9728 | 0.1639 | 0.7657 | 0.5352 | 0.7961 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.1+cpu - Datasets 2.7.1 - Tokenizers 0.13.2
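A minimal inference sketch with the token-classification pipeline (not part of the original card; `aggregation_strategy="simple"` merges word pieces into whole entity spans, and the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned French NER model from the Hub
ner = pipeline(
    "token-classification",
    model="SCD28/camembert-ner",
    aggregation_strategy="simple",  # group sub-tokens into entities
)

for entity in ner("Apple est créée le 1er avril 1976 à Cupertino par Steve Jobs."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```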
kfahn/Taxi-v3
kfahn
2022-12-17T15:23:33Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T15:23:22Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="kfahn/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
messham/LunarLander_Course2
messham
2022-12-17T15:18:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T15:18:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.86 +/- 20.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Scrya/whisper-medium-ms
Scrya
2022-12-17T15:08:05Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ms", "dataset:google/fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-16T11:39:38Z
--- language: - ms license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - google/fleurs model-index: - name: Whisper Medium MS - FLEURS results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: google/fleurs type: google/fleurs config: ms_my split: test metrics: - type: wer value: 11.75 name: WER - type: cer value: 3.49 name: CER --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium MS - FLEURS This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the FLEURS dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2941 - eval_wer: 10.2 - eval_runtime: 954.9 - eval_samples_per_second: 0.784 - eval_steps_per_second: 0.049 - epoch: 53.2 - step: 5000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 1 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
kfahn/q-FrozenLake-v1
kfahn
2022-12-17T15:04:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T15:04:48Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="kfahn/q-FrozenLake-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
marianna13/xlm-roberta-fine-tuned-on-russian-abusive-language
marianna13
2022-12-17T14:43:40Z
38
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "abusive text classification", "ru", "en", "dataset:AbusiveLanguageDataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T11:59:11Z
--- language: - ru - en tags: - abusive text classification license: "apache-2.0" datasets: - AbusiveLanguageDataset --- ```py from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, pipeline model_path = 'marianna13/xlm-roberta-fine-tuned-on-russian-abusive-language' id2label = { 0:'неопасный тескт', 1:'опасный тескт' } label2id = { 'неопасный тескт':0, 'опасный тескт':1 } config = AutoConfig.from_pretrained(model_path, id2label=id2label, label2id=label2id) tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForSequenceClassification.from_pretrained(model_path, config=config) text = "Прекрасный день." pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) pipe(text) ``` ```json [{'label': 'неопасный текcт', 'score': 0.9249424934387207}] ```
anuragshas/whisper-large-v2-hy
anuragshas
2022-12-17T14:26:43Z
27
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hy", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-17T11:27:58Z
--- language: - hy license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Large-v2 Armenian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 hy-AM type: mozilla-foundation/common_voice_11_0 config: hy-AM split: test args: hy-AM metrics: - name: Wer type: wer value: 40.23026315789473 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large-v2 Armenian This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 hy-AM dataset. It achieves the following results on the evaluation set: - Loss: 0.4429 - Wer: 40.2303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0113 | 8.02 | 200 | 0.3501 | 43.7171 | | 0.0003 | 17.01 | 400 | 0.3989 | 40.7895 | | 0.0001 | 26.0 | 600 | 0.4282 | 40.4605 | | 0.0001 | 34.02 | 800 | 0.4392 | 40.2632 | | 0.0001 | 43.01 | 1000 | 0.4429 | 40.2303 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
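Since the intended-uses section is still empty, the following is a hedged transcription sketch: the audio path is a placeholder, and `librosa` is assumed only as a convenient way to load the clip at 16 kHz.

```python
import librosa
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

repo = "anuragshas/whisper-large-v2-hy"
processor = WhisperProcessor.from_pretrained(repo)
model = WhisperForConditionalGeneration.from_pretrained(repo)

# "speech.wav" is a placeholder path to an Armenian audio clip
audio, _ = librosa.load("speech.wav", sr=16_000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

# Force Armenian transcription through Whisper's decoder prompt
forced_ids = processor.get_decoder_prompt_ids(language="hy", task="transcribe")
with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```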
thiagoabreulima/lunarlander
thiagoabreulima
2022-12-17T14:23:11Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T14:21:58Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 141.70 +/- 56.56
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below is a placeholder; check this repository's file list for the actual `.zip` name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Replace "<checkpoint>.zip" with the file name listed in this repository
checkpoint = load_from_hub(repo_id="thiagoabreulima/lunarlander", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
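With the model loaded, one possible evaluation loop follows; it assumes the classic Gym API that SB3 1.x expects and that `gym[box2d]` is installed for LunarLander.

```python
import gym

env = gym.make("LunarLander-v2")
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action, _states = model.predict(obs, deterministic=True)  # greedy policy action
    obs, reward, done, info = env.step(action)
    episode_return += reward
print(f"episode return: {episode_return:.1f}")
```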
gabrielgcbs/dqn-SpaceInvadersNoFrameskip-v4
gabrielgcbs
2022-12-17T13:51:25Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-17T13:50:54Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 374.00 +/- 214.89 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gabrielgcbs -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gabrielgcbs -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gabrielgcbs ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.01), ('learning_starts', 100000), ('n_timesteps', 300000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
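The dictionary above is the RL Zoo config for this run. As a rough, non-authoritative sketch, an equivalent setup written directly against Stable-Baselines3 (outside the zoo) might look like the following; the Atari wrapping and frame stacking stand in for the zoo's `env_wrapper` and `frame_stack` entries, and the exact training script is not reproduced here.

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# AtariWrapper preprocessing + 4-frame stacking, mirroring the zoo config
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    learning_rate=0.01,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1_000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=False,
    verbose=1,
)
model.learn(total_timesteps=300_000)
```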