| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-26 00:41:36 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (496 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-26 00:41:32 |
| card | string (length) | 11 | 1.01M |
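The rows below are easier to slice programmatically than to read. A minimal sketch with the 🤗 `datasets` library, where the repo id `user/model-card-dump` is a hypothetical stand-in for wherever this dump is published:

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the dataset this dump was exported from.
ds = load_dataset("user/model-card-dump", split="train")

# Filter to transformers models with a question-answering pipeline tag.
qa_models = ds.filter(
    lambda row: row["library_name"] == "transformers"
    and row["pipeline_tag"] == "question-answering"
)
print(qa_models["modelId"][:5])
```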
DaniyalMufti/q-FrozenLake-v1-4x4-noSlippery
DaniyalMufti
2023-01-09T13:53:26Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T13:18:18Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="DaniyalMufti/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
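The card's snippet stops after creating the environment. A hedged sketch of the full load-and-evaluate loop, re-creating the course's `load_from_hub` helper; the `"qtable"` key is an assumption about the pickled dict, and `gymnasium` is used in place of the card's older `gym` import:

```python
import pickle

import gymnasium as gym  # equivalent API to the card's `gym` for this use
import numpy as np
from huggingface_hub import hf_hub_download

# Assumed layout: the course pickles a dict with "env_id" and "qtable" keys.
path = hf_hub_download(
    repo_id="DaniyalMufti/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)  # the card notes is_slippery=False
qtable = np.array(model["qtable"])

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```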
xianbao/dashdash-toy-heywhale
xianbao
2023-01-09T13:40:56Z
31
1
diffusers
[ "diffusers", "text-to-image", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-09T13:28:39Z
--- tags: - text-to-image ---
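This card carries no usage instructions; the `diffusers:StableDiffusionPipeline` tag suggests it loads as a standard Stable Diffusion checkpoint. A minimal, untested sketch (the prompt is illustrative only, since the card does not document the trained concept):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "xianbao/dashdash-toy-heywhale", torch_dtype=torch.float16
).to("cuda")

image = pipe("a toy whale, product photo").images[0]  # illustrative prompt
image.save("toy.png")
```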
muhtasham/small-vanilla-target-glue-cola
muhtasham
2023-01-09T13:08:10Z
106
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T12:28:43Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: small-vanilla-target-glue-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-vanilla-target-glue-cola This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3381 - Matthews Correlation: 0.3994 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5491 | 1.87 | 500 | 0.6232 | 0.2182 | | 0.3596 | 3.73 | 1000 | 0.7203 | 0.3078 | | 0.233 | 5.6 | 1500 | 0.7825 | 0.3833 | | 0.168 | 7.46 | 2000 | 0.9239 | 0.3657 | | 0.1299 | 9.33 | 2500 | 1.1005 | 0.4196 | | 0.1085 | 11.19 | 3000 | 1.2032 | 0.3906 | | 0.0931 | 13.06 | 3500 | 1.3157 | 0.3226 | | 0.0766 | 14.93 | 4000 | 1.3381 | 0.3994 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
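The card omits a usage section; a hedged inference sketch with the standard `transformers` pipeline (the label names depend on the fine-tuning config, which the card does not document):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="muhtasham/small-vanilla-target-glue-cola"
)
# CoLA is a grammatical-acceptability task; label ids map to acceptable/unacceptable.
print(classifier("The book was written by the author."))
```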
KJIM/kobigbird-base30-73567294
KJIM
2023-01-09T12:43:57Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T07:11:38Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-base30-73567294 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-base30-73567294 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.3291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 30 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.4308 | | No log | 1.99 | 84 | 1.3291 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
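Like the other autogenerated cards in this dump, this one has no usage section. A hedged sketch with the question-answering pipeline; `custom_squad_v2` appears to be a Korean SQuAD-style dataset, so Korean inputs are the natural fit:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="KJIM/kobigbird-base30-73567294")
result = qa(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이다.",
)
print(result["answer"], result["score"])
```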
brand25/ppo-Huggy
brand25
2023-01-09T12:43:33Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-01-09T12:43:24Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: brand25/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
sd-dreambooth-library/riffusion-dragonfriction-tequila
sd-dreambooth-library
2023-01-09T12:41:50Z
31
0
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-09T12:40:57Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### riffusion_dragonfriction-tequila on Stable Diffusion via Dreambooth #### model by ololo123 This is the Stable Diffusion model fine-tuned on the riffusion_dragonfriction-tequila concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **dragonfriction** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/12.png) ![image 1](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/6.png) ![image 2](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/7.png) ![image 3](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/8.png) ![image 4](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/1.png) ![image 5](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/10.png) ![image 6](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/5.png) ![image 7](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/9.png) ![image 8](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/13.png) ![image 9](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/2.png) ![image 10](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/3.png) ![image 11](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/4.png) ![image 12](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/11.png) ![image 13](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/15.png) ![image 14](https://huggingface.co/sd-dreambooth-library/riffusion-dragonfriction-tequila/resolve/main/concept_images/14.png)
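The card says the concept is triggered through the `instance_prompt` token **dragonfriction** but gives no runnable snippet. A hedged inference sketch with `diffusers` (untested; the prompt wording is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/riffusion-dragonfriction-tequila",
    torch_dtype=torch.float16,
).to("cuda")

# The learned concept is invoked by including the instance token in the prompt.
image = pipe("a spectrogram in the style of dragonfriction").images[0]
image.save("dragonfriction.png")
```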
wxcvbnw/havrans
wxcvbnw
2023-01-09T12:29:04Z
29
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-09T12:18:23Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### havrans Dreambooth model trained by wxcvbnw with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
yuch0001/pokemon
yuch0001
2023-01-09T11:54:13Z
4
1
diffusers
[ "diffusers", "tensorboard", "en", "dataset:lambdalabs/pokemon-blip-captions", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2023-01-09T10:48:05Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: lambdalabs/pokemon-blip-captions metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # pokemon ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `lambdalabs/pokemon-blip-captions` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/yuch0001/pokemon/tensorboard?#scalars)
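The card's "How to use" snippet is still a TODO. Given the `diffusers:DDPMPipeline` tag, a hedged sketch of unconditional sampling (the pipeline class and defaults are inferred from the tags, not documented in the card):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("yuch0001/pokemon")

# Unconditional sampling; the BLIP captions were only part of the training data.
image = pipeline(num_inference_steps=1000).images[0]
image.save("pokemon_sample.png")
```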
muhtasham/tiny-mlm-glue-stsb-target-glue-mrpc
muhtasham
2023-01-09T11:45:56Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T11:39:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: tiny-mlm-glue-stsb-target-glue-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-stsb-target-glue-mrpc This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-stsb](https://huggingface.co/muhtasham/tiny-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2364 - Accuracy: 0.7132 - F1: 0.8047 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5901 | 4.35 | 500 | 0.5567 | 0.7108 | 0.8072 | | 0.4581 | 8.7 | 1000 | 0.5798 | 0.7377 | 0.8283 | | 0.3115 | 13.04 | 1500 | 0.6576 | 0.7426 | 0.8247 | | 0.197 | 17.39 | 2000 | 0.7977 | 0.7255 | 0.8152 | | 0.1153 | 21.74 | 2500 | 1.0637 | 0.7059 | 0.7973 | | 0.0843 | 26.09 | 3000 | 1.2364 | 0.7132 | 0.8047 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
vinayak361/token_fine_tunned_flipkart_2_gl7
vinayak361
2023-01-09T11:34:50Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-05T09:41:22Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: token_fine_tunned_flipkart_2_gl7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # token_fine_tunned_flipkart_2_gl7 This model is a fine-tuned version of [vinayak361/token_fine_tunned_flipkart_2_gl](https://huggingface.co/vinayak361/token_fine_tunned_flipkart_2_gl) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7179 - Precision: 0.7122 - Recall: 0.7571 - F1: 0.7340 - Accuracy: 0.7485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 135 | 1.0392 | 0.6634 | 0.7121 | 0.6869 | 0.6982 | | No log | 2.0 | 270 | 0.8567 | 0.6697 | 0.7128 | 0.6906 | 0.7093 | | No log | 3.0 | 405 | 0.8102 | 0.6707 | 0.7204 | 0.6947 | 0.7146 | | 0.9223 | 4.0 | 540 | 0.7840 | 0.6860 | 0.7363 | 0.7103 | 0.7253 | | 0.9223 | 5.0 | 675 | 0.7668 | 0.6886 | 0.7301 | 0.7088 | 0.7267 | | 0.9223 | 6.0 | 810 | 0.7543 | 0.6886 | 0.7329 | 0.7100 | 0.7301 | | 0.9223 | 7.0 | 945 | 0.7501 | 0.6997 | 0.7384 | 0.7185 | 0.7340 | | 0.708 | 8.0 | 1080 | 0.7383 | 0.6949 | 0.7426 | 0.7180 | 0.7335 | | 0.708 | 9.0 | 1215 | 0.7360 | 0.7030 | 0.7453 | 0.7235 | 0.7379 | | 0.708 | 10.0 | 1350 | 0.7319 | 0.7048 | 0.7453 | 0.7245 | 0.7389 | | 0.708 | 11.0 | 1485 | 0.7306 | 0.7052 | 0.7467 | 0.7254 | 0.7398 | | 0.6327 | 12.0 | 1620 | 0.7220 | 0.7049 | 0.7488 | 0.7262 | 0.7413 | | 0.6327 | 13.0 | 1755 | 0.7198 | 0.7059 | 0.7509 | 0.7277 | 0.7432 | | 0.6327 | 14.0 | 1890 | 0.7203 | 0.7108 | 0.7585 | 0.7338 | 0.7481 | | 0.5954 | 15.0 | 2025 | 0.7193 | 0.7118 | 0.7571 | 0.7337 | 0.7481 | | 0.5954 | 16.0 | 2160 | 0.7175 | 0.7122 | 0.7585 | 0.7346 | 0.7476 | | 0.5954 | 17.0 | 2295 | 0.7176 | 0.7144 | 0.7599 | 0.7364 | 0.7481 | | 0.5954 | 18.0 | 2430 | 0.7183 | 0.7153 | 0.7599 | 0.7369 | 0.7490 | | 0.5699 | 19.0 | 2565 | 0.7173 | 0.7122 | 0.7571 | 0.7340 | 0.7485 | | 0.5699 | 20.0 | 2700 | 0.7179 | 0.7122 | 0.7571 | 0.7340 | 0.7485 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
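This card, too, lacks a usage section; a hedged token-classification sketch (the input is illustrative and the label set is undocumented):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="vinayak361/token_fine_tunned_flipkart_2_gl7",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("blue cotton shirt size large"))  # illustrative product-style input
```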
lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-5e-05
lixiqi
2023-01-09T11:26:49Z
174
0
transformers
[ "transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "dataset:image_folder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-09T10:43:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-5e-05 results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: Accuracy type: accuracy value: 0.6833379771524102 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-5e-05 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.8610 - Accuracy: 0.6833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1691 | 1.0 | 224 | 0.9764 | 0.6310 | | 1.0304 | 2.0 | 448 | 0.8965 | 0.6666 | | 0.9844 | 3.0 | 672 | 0.8610 | 0.6833 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
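The card stops at the training log. A hedged inference sketch with the image-classification pipeline; FER2013 labels are emotion classes, though the exact label names come from the training folders:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-5e-05",
)
print(classifier("face.jpg"))  # path to a local face image
```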
toinsson/Reinforce-cartpole-0
toinsson
2023-01-09T11:11:20Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T11:11:09Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole-0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 488.40 +/- 34.80 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
YuJungSoo/kobigbird-base26-46196128
YuJungSoo
2023-01-09T11:00:47Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T10:08:47Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-base26-46196128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-base26-46196128 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 26 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.8458 | | No log | 1.99 | 84 | 1.4533 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
BiggieW/classification_chnsenticorp_eda_aug
BiggieW
2023-01-09T10:57:00Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T09:55:45Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: classification_chnsenticorp_eda_aug results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # classification_chnsenticorp_eda_aug This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7802 - Accuracy: 0.55 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4849 | 1.0 | 20 | 0.6880 | 0.4 | | 0.0979 | 2.0 | 40 | 0.8746 | 0.6 | | 0.0238 | 3.0 | 60 | 0.7802 | 0.55 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-sst2-target-glue-stsb
muhtasham
2023-01-09T10:55:52Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T10:43:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: tiny-mlm-glue-sst2-target-glue-stsb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-sst2-target-glue-stsb This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9195 - Pearson: 0.8130 - Spearmanr: 0.8114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 2.7776 | 2.78 | 500 | 1.1238 | 0.7313 | 0.7669 | | 0.932 | 5.56 | 1000 | 1.0628 | 0.7833 | 0.8086 | | 0.737 | 8.33 | 1500 | 1.0050 | 0.8025 | 0.8208 | | 0.6099 | 11.11 | 2000 | 0.8592 | 0.8165 | 0.8220 | | 0.5164 | 13.89 | 2500 | 0.8875 | 0.8158 | 0.8181 | | 0.4659 | 16.67 | 3000 | 0.9524 | 0.8155 | 0.8198 | | 0.4114 | 19.44 | 3500 | 0.8872 | 0.8173 | 0.8174 | | 0.3728 | 22.22 | 4000 | 0.9423 | 0.8163 | 0.8166 | | 0.3396 | 25.0 | 4500 | 0.9953 | 0.8197 | 0.8202 | | 0.321 | 27.78 | 5000 | 0.9409 | 0.8160 | 0.8160 | | 0.3034 | 30.56 | 5500 | 0.9273 | 0.8142 | 0.8139 | | 0.2811 | 33.33 | 6000 | 0.9195 | 0.8130 | 0.8114 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
dustofappearan/Dispoa
dustofappearan
2023-01-09T10:52:29Z
0
0
diffusers
[ "diffusers", "en", "dataset:nateraw/midjourney-texttoimage", "region:us" ]
null
2023-01-09T10:51:14Z
--- datasets: - nateraw/midjourney-texttoimage language: - en library_name: diffusers ---
KJIM/kobigbird-base21-97861855
KJIM
2023-01-09T10:41:12Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T09:55:21Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-base21-97861855 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-base21-97861855 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.3456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 21 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 2.1518 | | No log | 1.99 | 84 | 1.3456 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-9e-05
lixiqi
2023-01-09T10:37:20Z
176
0
transformers
[ "transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "dataset:image_folder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-08T20:18:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-9e-05 results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: Accuracy type: accuracy value: 0.6840345500139314 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-9e-05 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.8481 - Accuracy: 0.6840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1839 | 1.0 | 224 | 1.0266 | 0.6120 | | 1.0333 | 2.0 | 448 | 0.9063 | 0.6608 | | 0.9655 | 3.0 | 672 | 0.8481 | 0.6840 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
roscazo/CTEBMSP_ANAT_DISO
roscazo
2023-01-09T10:27:00Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-09T08:48:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: CTEBMSP_ANAT_DISO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CTEBMSP_ANAT_DISO This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0909 - Anat Precision: 0.7522 - Anat Recall: 0.7147 - Anat F1: 0.7330 - Anat Number: 361 - Diso Precision: 0.8915 - Diso Recall: 0.8919 - Diso F1: 0.8917 - Diso Number: 2645 - Overall Precision: 0.8755 - Overall Recall: 0.8706 - Overall F1: 0.8731 - Overall Accuracy: 0.9873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Anat Precision | Anat Recall | Anat F1 | Anat Number | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0592 | 1.0 | 2133 | 0.0506 | 0.6950 | 0.4986 | 0.5806 | 361 | 0.8635 | 0.8609 | 0.8622 | 2645 | 0.8484 | 0.8174 | 0.8326 | 0.9843 | | 0.0323 | 2.0 | 4266 | 0.0583 | 0.7899 | 0.6039 | 0.6845 | 361 | 0.8780 | 0.8817 | 0.8798 | 2645 | 0.8697 | 0.8483 | 0.8589 | 0.9858 | | 0.0201 | 3.0 | 6399 | 0.0580 | 0.6565 | 0.7147 | 0.6844 | 361 | 0.8598 | 0.8764 | 0.8680 | 2645 | 0.8339 | 0.8570 | 0.8453 | 0.9851 | | 0.0121 | 4.0 | 8532 | 0.0758 | 0.7240 | 0.6759 | 0.6991 | 361 | 0.8976 | 0.8752 | 0.8863 | 2645 | 0.8776 | 0.8513 | 0.8642 | 0.9863 | | 0.0078 | 5.0 | 10665 | 0.0814 | 0.7219 | 0.7119 | 0.7169 | 361 | 0.8776 | 0.8975 | 0.8875 | 2645 | 0.8595 | 0.8752 | 0.8673 | 0.9862 | | 0.0031 | 6.0 | 12798 | 0.0974 | 0.7599 | 0.6399 | 0.6947 | 361 | 0.8895 | 0.8915 | 0.8905 | 2645 | 0.8761 | 0.8613 | 0.8686 | 0.9867 | | 0.002 | 7.0 | 14931 | 0.0980 | 0.7143 | 0.6787 | 0.6960 | 361 | 0.8813 | 0.8957 | 0.8884 | 2645 | 0.8624 | 0.8696 | 0.8660 | 0.9860 | | 0.0005 | 8.0 | 17064 | 0.0909 | 0.7522 | 0.7147 | 0.7330 | 361 | 0.8915 | 0.8919 | 0.8917 | 2645 | 0.8755 | 0.8706 | 0.8731 | 0.9873 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-sst2-target-glue-rte
muhtasham
2023-01-09T10:24:09Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T10:18:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-sst2-target-glue-rte results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-sst2-target-glue-rte This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5470 - Accuracy: 0.6065 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6398 | 6.41 | 500 | 0.6742 | 0.5993 | | 0.437 | 12.82 | 1000 | 0.8177 | 0.6318 | | 0.2692 | 19.23 | 1500 | 1.0300 | 0.6137 | | 0.1609 | 25.64 | 2000 | 1.2420 | 0.6137 | | 0.1 | 32.05 | 2500 | 1.5470 | 0.6065 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
sudong97/kobigbird-base23-84859751
sudong97
2023-01-09T10:23:26Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T09:38:27Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-base23-84859751 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-base23-84859751 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 23 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.6141 | | No log | 1.99 | 84 | 1.4628 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-loss-att-take-2
rohitp1
2023-01-09T10:22:07Z
103
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-01-09T04:36:47Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-loss-att-take-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-loss-att-take-2 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 26.4101 - Wer: 0.2791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 202.4293 | 0.45 | 200 | 26.7777 | 0.2779 | | 197.6471 | 0.9 | 400 | 25.8300 | 0.2760 | | 204.8931 | 1.35 | 600 | 25.6774 | 0.2747 | | 193.3182 | 1.79 | 800 | 25.6049 | 0.2737 | | 205.2241 | 2.24 | 1000 | 25.5552 | 0.2739 | | 186.0407 | 2.69 | 1200 | 25.4364 | 0.2737 | | 191.7055 | 3.14 | 1400 | 25.7949 | 0.2764 | | 185.0721 | 3.59 | 1600 | 26.1202 | 0.2753 | | 198.8579 | 4.04 | 1800 | 25.8496 | 0.2763 | | 185.7877 | 4.48 | 2000 | 27.0753 | 0.2731 | | 194.9394 | 4.93 | 2200 | 25.6920 | 0.2775 | | 188.2296 | 5.38 | 2400 | 25.7362 | 0.2742 | | 188.0202 | 5.83 | 2600 | 25.9170 | 0.2755 | | 191.5541 | 6.28 | 2800 | 26.8590 | 0.2771 | | 198.2817 | 6.73 | 3000 | 26.4101 | 0.2791 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1 - Datasets 2.7.0 - Tokenizers 0.11.0
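No usage section here either; a hedged ASR sketch with the standard pipeline (audio should be 16 kHz, LibriSpeech-style, which is an assumption based on the model name):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-loss-att-take-2",
)
print(asr("sample.flac")["text"])  # path to a local 16 kHz audio file
```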
TransLL/bert-base-uncased-issues-128
TransLL
2023-01-09T10:18:48Z
106
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-09T09:08:09Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0986 | 1.0 | 291 | 1.6929 | | 1.6401 | 2.0 | 582 | 1.4304 | | 1.4881 | 3.0 | 873 | 1.3916 | | 1.4 | 4.0 | 1164 | 1.3796 | | 1.3416 | 5.0 | 1455 | 1.2012 | | 1.2807 | 6.0 | 1746 | 1.2733 | | 1.2396 | 7.0 | 2037 | 1.2646 | | 1.1993 | 8.0 | 2328 | 1.2098 | | 1.1661 | 9.0 | 2619 | 1.1862 | | 1.1406 | 10.0 | 2910 | 1.2223 | | 1.1294 | 11.0 | 3201 | 1.2056 | | 1.1042 | 12.0 | 3492 | 1.1655 | | 1.0827 | 13.0 | 3783 | 1.2525 | | 1.0738 | 14.0 | 4074 | 1.1685 | | 1.0626 | 15.0 | 4365 | 1.1182 | | 1.0629 | 16.0 | 4656 | 1.2456 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
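Since the model was adapted on GitHub-issue text (per its name), a hedged fill-mask sketch with a bug-report-style prompt:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="TransLL/bert-base-uncased-issues-128")
for pred in fill("This issue is related to the [MASK] module."):
    print(pred["token_str"], round(pred["score"], 3))
```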
muhtasham/tiny-mlm-glue-sst2-target-glue-qqp
muhtasham
2023-01-09T10:16:48Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T09:23:29Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: tiny-mlm-glue-sst2-target-glue-qqp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-sst2-target-glue-qqp This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4117 - Accuracy: 0.7972 - F1: 0.7705 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.578 | 0.04 | 500 | 0.5173 | 0.7295 | 0.6786 | | 0.5102 | 0.09 | 1000 | 0.4813 | 0.7532 | 0.7023 | | 0.4981 | 0.13 | 1500 | 0.4910 | 0.7409 | 0.7150 | | 0.4808 | 0.18 | 2000 | 0.4655 | 0.7558 | 0.7214 | | 0.4728 | 0.22 | 2500 | 0.4552 | 0.7634 | 0.7282 | | 0.4557 | 0.26 | 3000 | 0.4475 | 0.7693 | 0.7353 | | 0.4577 | 0.31 | 3500 | 0.4464 | 0.7690 | 0.7379 | | 0.4507 | 0.35 | 4000 | 0.4495 | 0.7670 | 0.7397 | | 0.4511 | 0.4 | 4500 | 0.4409 | 0.7721 | 0.7437 | | 0.4414 | 0.44 | 5000 | 0.4189 | 0.7903 | 0.7499 | | 0.4291 | 0.48 | 5500 | 0.4267 | 0.7838 | 0.7510 | | 0.431 | 0.53 | 6000 | 0.4064 | 0.8005 | 0.7566 | | 0.4236 | 0.57 | 6500 | 0.4161 | 0.7930 | 0.7573 | | 0.4258 | 0.62 | 7000 | 0.4038 | 0.8030 | 0.7608 | | 0.4167 | 0.66 | 7500 | 0.4066 | 0.8041 | 0.7648 | | 0.4312 | 0.7 | 8000 | 0.4111 | 0.7966 | 0.7621 | | 0.4203 | 0.75 | 8500 | 0.3971 | 0.8068 | 0.7671 | | 0.4143 | 0.79 | 9000 | 0.4187 | 0.7894 | 0.7613 | | 0.4115 | 0.84 | 9500 | 0.3884 | 0.8127 | 0.7688 | | 0.4133 | 0.88 | 10000 | 0.3849 | 0.8172 | 0.7731 | | 0.4091 | 0.92 | 10500 | 0.3826 | 0.8178 | 0.7725 | | 0.4085 | 0.97 | 11000 | 0.3832 | 0.8186 | 0.7723 | | 0.4066 | 1.01 | 11500 | 0.4000 | 0.8039 | 0.7711 | | 0.3859 | 1.06 | 12000 | 0.3798 | 0.8195 | 0.7758 | | 0.3955 | 1.1 | 12500 | 0.3835 | 0.8159 | 0.7781 | | 0.3833 | 1.14 | 13000 | 0.3872 | 0.8138 | 0.7764 | | 0.3722 | 1.19 | 13500 | 0.4117 | 0.7972 | 0.7705 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Shobhank-iiitdwd/BERT-L-QA
Shobhank-iiitdwd
2023-01-09T10:06:34Z
108
0
transformers
[ "transformers", "pytorch", "jax", "bert", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T09:54:33Z
--- language: en license: cc-by-4.0 datasets: - squad_v2 model-index: - name: deepset/bert-large-uncased-whole-word-masking-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 80.8846 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2E5ZGNkY2ExZWViZGEwNWE3OGRmMWM2ZmE4ZDU4ZDQ1OGM3ZWE0NTVmZjFmYmZjZmJmNjJmYTc3NTM3OTk3OSIsInZlcnNpb24iOjF9.aSblF4ywh1fnHHrN6UGL392R5KLaH3FCKQlpiXo_EdQ4XXEAENUCjYm9HWDiFsgfSENL35GkbSyz_GAhnefsAQ - type: f1 value: 83.8765 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFlNmEzMTk2NjRkNTI3ZTk3ZTU1NWNlYzIyN2E0ZDFlNDA2ZjYwZWJlNThkMmRmMmE0YzcwYjIyZDM5NmRiMCIsInZlcnNpb24iOjF9.-rc2_Bsp_B26-o12MFYuAU0Ad2Hg9PDx7Preuk27WlhYJDeKeEr32CW8LLANQABR3Mhw2x8uTYkEUrSDMxxLBw --- # bert-large-uncased-whole-word-masking-squad2 This is a bert-large model, fine-tuned using the SQuAD2.0 dataset for the task of question answering. ## Overview **Language model:** bert-large **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2") # or reader = TransformersReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2",tokenizer="deepset/bert-large-uncased-whole-word-masking-squad2") ``` ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/bert-large-uncased-whole-word-masking-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
nc33/T5_finetuned
nc33
2023-01-09T09:47:09Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-09T04:38:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - rouge model-index: - name: T5_finetuned results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: super_glue type: super_glue config: boolq split: train args: boolq metrics: - name: Rouge1 type: rouge value: 79.3272 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5_finetuned This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.1077 - Rouge1: 79.3272 - Rouge2: 0.0 - Rougel: 79.2966 - Rougelsum: 79.3272 - Gen Len: 2.8269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 0.5134 | 1.0 | 590 | 0.1102 | 79.8165 | 0.0 | 79.8165 | 79.8471 | 2.7713 | | 0.105 | 2.0 | 1180 | 0.1049 | 80.3364 | 0.0 | 80.3364 | 80.367 | 2.6483 | | 0.1023 | 3.0 | 1770 | 0.1077 | 79.3272 | 0.0 | 79.2966 | 79.3272 | 2.8269 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
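The card gives metrics on super_glue/boolq but no inference snippet; a hedged sketch with the text2text pipeline, where the "question: ... passage: ..." prompt format is an assumption about how the training script serialized BoolQ examples:

```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="nc33/T5_finetuned")

# Assumed BoolQ-style serialization; the actual training prompt format is undocumented.
prompt = (
    "question: is the sky blue? "
    "passage: The sky appears blue due to Rayleigh scattering of sunlight."
)
print(t2t(prompt)[0]["generated_text"])
```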
KJIM/kobigbird-pure50-8977015
KJIM
2023-01-09T09:34:26Z
92
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T09:09:05Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-pure50-8977015 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-pure50-8977015 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.2394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 50 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.8128 | | No log | 1.99 | 84 | 1.2394 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
ilhkn/my-awesome-setfit-model1
ilhkn
2023-01-09T09:21:34Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-01-09T09:21:15Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
susooo/kobigbird-base27-63168558
susooo
2023-01-09T09:12:33Z
91
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T05:46:16Z
---
tags:
- generated_from_trainer
datasets:
- custom_squad_v2
model-index:
- name: kobigbird-base27-63168558
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# kobigbird-base27-63168558

This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3353

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 27
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| No log | 0.99 | 42 | 1.3859 |
| No log | 1.99 | 84 | 1.3353 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
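The auto-generated card stops short of inference code. A minimal sketch with the standard question-answering pipeline follows, assuming the checkpoint loads with the default `AutoTokenizer`/QA head; the Korean question and context are invented:

```python
from transformers import pipeline

# Minimal sketch: standard QA pipeline over the fine-tuned checkpoint.
qa = pipeline("question-answering", model="susooo/kobigbird-base27-63168558")

# Invented Korean example (the model was tuned on a Korean SQuAD-style dataset).
result = qa(question="대한민국의 수도는 어디인가?", context="대한민국의 수도는 서울이다.")
print(result["answer"], result["score"])
```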
bdiptesh99/rl-ql-Taxi-v3
bdiptesh99
2023-01-09T09:00:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T07:22:18Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: rl-ql-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="bdiptesh99/rl-ql-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
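The usage snippet above ends at environment creation. A hedged rollout sketch follows; it assumes the pickled dict exposes the learned table under a `"qtable"` key (the Deep RL Course convention) and the pre-0.26 Gym step API — both are assumptions, so adjust to your pickle and Gym version:

```python
import gym
import numpy as np

# Assumes `model` is the dict loaded above and stores the Q-table under "qtable".
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

state = env.reset()  # pre-0.26 Gym API: reset() returns the observation only
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the learned Q-values
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```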
muhtasham/tiny-mlm-glue-sst2-target-glue-mnli
muhtasham
2023-01-09T09:00:24Z
108
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T08:28:58Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tiny-mlm-glue-sst2-target-glue-mnli
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-sst2-target-glue-mnli

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7870
- Accuracy: 0.6519

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 1.076 | 0.04 | 500 | 1.0342 | 0.4657 |
| 1.0114 | 0.08 | 1000 | 0.9714 | 0.5393 |
| 0.9654 | 0.12 | 1500 | 0.9268 | 0.5736 |
| 0.9381 | 0.16 | 2000 | 0.9120 | 0.5849 |
| 0.9266 | 0.2 | 2500 | 0.8942 | 0.5953 |
| 0.9171 | 0.24 | 3000 | 0.8783 | 0.6014 |
| 0.9009 | 0.29 | 3500 | 0.8687 | 0.6085 |
| 0.8932 | 0.33 | 4000 | 0.8567 | 0.6191 |
| 0.8767 | 0.37 | 4500 | 0.8524 | 0.6171 |
| 0.8768 | 0.41 | 5000 | 0.8436 | 0.6231 |
| 0.8702 | 0.45 | 5500 | 0.8374 | 0.6220 |
| 0.8673 | 0.49 | 6000 | 0.8345 | 0.6271 |
| 0.8684 | 0.53 | 6500 | 0.8274 | 0.6274 |
| 0.8606 | 0.57 | 7000 | 0.8282 | 0.6298 |
| 0.8528 | 0.61 | 7500 | 0.8146 | 0.6363 |
| 0.8529 | 0.65 | 8000 | 0.8103 | 0.6406 |
| 0.8467 | 0.69 | 8500 | 0.8237 | 0.6320 |
| 0.8478 | 0.73 | 9000 | 0.7964 | 0.6473 |
| 0.8399 | 0.77 | 9500 | 0.8081 | 0.6391 |
| 0.8295 | 0.81 | 10000 | 0.7954 | 0.6475 |
| 0.833 | 0.86 | 10500 | 0.7994 | 0.6439 |
| 0.8316 | 0.9 | 11000 | 0.7886 | 0.6513 |
| 0.8239 | 0.94 | 11500 | 0.7847 | 0.6544 |
| 0.8247 | 0.98 | 12000 | 0.7848 | 0.6512 |
| 0.81 | 1.02 | 12500 | 0.7915 | 0.6507 |
| 0.8059 | 1.06 | 13000 | 0.7870 | 0.6519 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
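These auto-generated classification cards carry no inference example; a minimal sketch for this MNLI fine-tune follows, assuming the saved config retains a three-way `id2label` mapping (if not, only the integer class index is meaningful). The premise/hypothesis pair is invented:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "muhtasham/tiny-mlm-glue-sst2-target-glue-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI is sentence-pair classification: encode premise and hypothesis together.
inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(pred, model.config.id2label.get(pred, pred))
```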
KJIM/kobigbird-pure49-55481524
KJIM
2023-01-09T08:57:33Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T08:24:50Z
---
tags:
- generated_from_trainer
datasets:
- custom_squad_v2
model-index:
- name: kobigbird-pure49-55481524
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# kobigbird-pure49-55481524

This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1357

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 49
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| No log | 0.99 | 42 | 1.2047 |
| No log | 1.99 | 84 | 1.1357 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
Niraya666/q-FrozenLake-v1-4x4-noSlippery
Niraya666
2023-01-09T08:53:03Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T08:52:56Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="Niraya666/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
ThaiKami/bartpho-word-BA-fix-001
ThaiKami
2023-01-09T08:51:01Z
103
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "legal", "vi", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-09T08:23:04Z
---
language:
- vi
metrics:
- rouge
library_name: transformers
pipeline_tag: text2text-generation
tags:
- legal
---
padmajabfrl/demo
padmajabfrl
2023-01-09T08:46:16Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T07:33:32Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: demo
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# demo

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.0059 | 1.0 | 4390 | 0.0000 | 1.0 |
| 0.0 | 2.0 | 8780 | 0.0000 | 1.0 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
charlemagne/distilbert-base-uncased-new2-mnli
charlemagne
2023-01-09T08:29:50Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T08:25:40Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-new2-mnli
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-new2-mnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.9427

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 164 | 0.4336 | 0.8678 |
| No log | 2.0 | 328 | 0.2592 | 0.9320 |
| No log | 3.0 | 492 | 0.2546 | 0.9351 |
| 0.4501 | 4.0 | 656 | 0.2204 | 0.9427 |
| 0.4501 | 5.0 | 820 | 0.2181 | 0.9404 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.8.0+cu111
- Datasets 2.1.0
- Tokenizers 0.11.6
likejazz/xlm-roberta-base-finetuned-panx-all
likejazz
2023-01-09T08:24:10Z
111
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-09T08:19:17Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-all

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1574
- F1: 0.8504

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 179 | 0.1897 | 0.8147 |
| No log | 2.0 | 358 | 0.1624 | 0.8394 |
| No log | 3.0 | 537 | 0.1574 | 0.8504 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.13.1+cu117
- Datasets 1.16.1
- Tokenizers 0.10.3
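The token-classification cards likewise omit usage; a hedged sketch with the standard pipeline follows — the German sentence is invented, and `aggregation_strategy="simple"` is a generic pipeline option rather than anything the card prescribes:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="likejazz/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```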
likejazz/xlm-roberta-base-finetuned-panx-en
likejazz
2023-01-09T08:19:05Z
112
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-09T08:15:51Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.en
    metrics:
    - name: F1
      type: f1
      value: 0.4989626556016597
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6888
- F1: 0.4990

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 13 | 1.1149 | 0.1584 |
| No log | 2.0 | 26 | 0.7899 | 0.4283 |
| No log | 3.0 | 39 | 0.6888 | 0.4990 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.13.1+cu117
- Datasets 1.16.1
- Tokenizers 0.10.3
sudong97/kobigbird-pure23-34112365
sudong97
2023-01-09T08:17:50Z
90
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T07:43:09Z
---
tags:
- generated_from_trainer
datasets:
- custom_squad_v2
model-index:
- name: kobigbird-pure23-34112365
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# kobigbird-pure23-34112365

This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6619

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 23
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| No log | 0.99 | 42 | 1.5290 |
| No log | 1.99 | 84 | 1.3679 |
| No log | 2.99 | 126 | 1.6619 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
likejazz/xlm-roberta-base-finetuned-panx-fr
likejazz
2023-01-09T08:11:43Z
108
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-09T08:08:17Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.fr
    metrics:
    - name: F1
      type: f1
      value: 0.8205897051474264
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2859
- F1: 0.8206

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 48 | 0.4009 | 0.7464 |
| No log | 2.0 | 96 | 0.3035 | 0.7971 |
| No log | 3.0 | 144 | 0.2859 | 0.8206 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.13.1+cu117
- Datasets 1.16.1
- Tokenizers 0.10.3
likejazz/xlm-roberta-base-finetuned-panx-de
likejazz
2023-01-09T08:00:12Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-06T07:37:23Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.8515740425048302
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1351
- F1: 0.8516

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 132 | 0.1641 | 0.8141 |
| No log | 2.0 | 264 | 0.1410 | 0.8399 |
| No log | 3.0 | 396 | 0.1351 | 0.8516 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.13.1+cu117
- Datasets 1.16.1
- Tokenizers 0.10.3
LarryAIDraw/bocchi3-20000
LarryAIDraw
2023-01-09T07:59:40Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-09T07:59:20Z
---
license: creativeml-openrail-m
---
AdiKompella/Reinforce-PixelCopter
AdiKompella
2023-01-09T07:49:12Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T07:49:08Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 22.20 +/- 21.60
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
muhtasham/tiny-mlm-glue-rte-target-glue-qqp
muhtasham
2023-01-09T07:34:18Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T06:40:02Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-glue-rte-target-glue-qqp
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-rte-target-glue-qqp

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4155
- Accuracy: 0.7949
- F1: 0.7691

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.5776 | 0.04 | 500 | 0.5189 | 0.7264 | 0.6855 |
| 0.5081 | 0.09 | 1000 | 0.4824 | 0.7519 | 0.7059 |
| 0.4951 | 0.13 | 1500 | 0.4940 | 0.7377 | 0.7141 |
| 0.4792 | 0.18 | 2000 | 0.4704 | 0.7526 | 0.7221 |
| 0.4722 | 0.22 | 2500 | 0.4571 | 0.7618 | 0.7277 |
| 0.4557 | 0.26 | 3000 | 0.4496 | 0.7677 | 0.7346 |
| 0.4567 | 0.31 | 3500 | 0.4480 | 0.7677 | 0.7378 |
| 0.4497 | 0.35 | 4000 | 0.4502 | 0.7655 | 0.7386 |
| 0.4503 | 0.4 | 4500 | 0.4426 | 0.7712 | 0.7432 |
| 0.4412 | 0.44 | 5000 | 0.4216 | 0.7889 | 0.7501 |
| 0.4291 | 0.48 | 5500 | 0.4284 | 0.7837 | 0.7515 |
| 0.4293 | 0.53 | 6000 | 0.4075 | 0.8004 | 0.7577 |
| 0.4241 | 0.57 | 6500 | 0.4230 | 0.7879 | 0.7559 |
| 0.4253 | 0.62 | 7000 | 0.4067 | 0.8002 | 0.7601 |
| 0.4166 | 0.66 | 7500 | 0.4083 | 0.8026 | 0.7646 |
| 0.4302 | 0.7 | 8000 | 0.4121 | 0.7964 | 0.7624 |
| 0.4206 | 0.75 | 8500 | 0.3993 | 0.8051 | 0.7667 |
| 0.4147 | 0.79 | 9000 | 0.4202 | 0.7884 | 0.7610 |
| 0.4117 | 0.84 | 9500 | 0.3915 | 0.8094 | 0.7677 |
| 0.4131 | 0.88 | 10000 | 0.3863 | 0.8156 | 0.7735 |
| 0.4089 | 0.92 | 10500 | 0.3832 | 0.8157 | 0.7713 |
| 0.4086 | 0.97 | 11000 | 0.3836 | 0.8180 | 0.7732 |
| 0.406 | 1.01 | 11500 | 0.4042 | 0.8018 | 0.7707 |
| 0.3854 | 1.06 | 12000 | 0.3819 | 0.8182 | 0.7763 |
| 0.3952 | 1.1 | 12500 | 0.3836 | 0.8149 | 0.7771 |
| 0.3827 | 1.14 | 13000 | 0.3898 | 0.8134 | 0.7766 |
| 0.3719 | 1.19 | 13500 | 0.4155 | 0.7949 | 0.7691 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
Nishant91/Reinforce-CartPole8
Nishant91
2023-01-09T06:57:07Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T06:56:57Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
LarryAIDraw/kblueleaf
LarryAIDraw
2023-01-09T06:56:35Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-09T05:49:16Z
---
license: creativeml-openrail-m
---
leoleung93/dqn-SpaceInvadersNoFrameskip-v4
leoleung93
2023-01-09T06:49:36Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T06:49:08Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 14.50 +/- 12.34
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga leoleung93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga leoleung93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga leoleung93
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 100000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
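The card documents the RL Zoo CLI only; loading the checkpoint directly in Python can be sketched as below. The `.zip` filename follows the zoo's usual `<algo>-<env>` naming, which is an assumption here — check the repo's file listing if the download fails:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed RL Zoo filename convention: <algo>-<env>.zip
checkpoint = load_from_hub(
    repo_id="leoleung93/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
print(model.policy)
```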
yansong/trained_models_3
yansong
2023-01-09T06:41:21Z
0
0
null
[ "region:us" ]
null
2023-01-09T06:41:03Z
This directory includes a few sample datasets to get you started.

*   `california_housing_data*.csv` is California housing data from the 1990 US Census; more information is available at: https://developers.google.com/machine-learning/crash-course/california-housing-data-description
*   `mnist_*.csv` is a small sample of the [MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is described at: http://yann.lecun.com/exdb/mnist/
*   `anscombe.json` contains a copy of [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it was originally described in Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American Statistician. 27 (1): 17-21. JSTOR 2682899. Our copy was prepared by the [vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json).
KJIM/kobigbird-base29-54981035
KJIM
2023-01-09T06:26:20Z
89
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T05:40:59Z
---
tags:
- generated_from_trainer
datasets:
- custom_squad_v2
model-index:
- name: kobigbird-base29-54981035
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# kobigbird-base29-54981035

This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2076

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 29
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| No log | 0.99 | 42 | 6.2076 |
| No log | 1.99 | 84 | 6.2076 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-rte-target-glue-mnli
muhtasham
2023-01-09T06:17:04Z
107
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T05:45:57Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tiny-mlm-glue-rte-target-glue-mnli
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-rte-target-glue-mnli

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7947
- Accuracy: 0.6475

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 1.0719 | 0.04 | 500 | 1.0318 | 0.4653 |
| 1.0131 | 0.08 | 1000 | 0.9779 | 0.5247 |
| 0.9748 | 0.12 | 1500 | 0.9293 | 0.5769 |
| 0.9415 | 0.16 | 2000 | 0.9073 | 0.5893 |
| 0.9255 | 0.2 | 2500 | 0.8888 | 0.6011 |
| 0.9168 | 0.24 | 3000 | 0.8789 | 0.6042 |
| 0.8998 | 0.29 | 3500 | 0.8704 | 0.6077 |
| 0.8948 | 0.33 | 4000 | 0.8624 | 0.6114 |
| 0.8791 | 0.37 | 4500 | 0.8571 | 0.6176 |
| 0.8832 | 0.41 | 5000 | 0.8501 | 0.6192 |
| 0.8742 | 0.45 | 5500 | 0.8423 | 0.6247 |
| 0.87 | 0.49 | 6000 | 0.8410 | 0.6280 |
| 0.874 | 0.53 | 6500 | 0.8322 | 0.6328 |
| 0.8623 | 0.57 | 7000 | 0.8342 | 0.6296 |
| 0.8563 | 0.61 | 7500 | 0.8192 | 0.6394 |
| 0.8562 | 0.65 | 8000 | 0.8194 | 0.6367 |
| 0.8504 | 0.69 | 8500 | 0.8284 | 0.6327 |
| 0.8519 | 0.73 | 9000 | 0.8044 | 0.6424 |
| 0.8436 | 0.77 | 9500 | 0.8175 | 0.6354 |
| 0.8349 | 0.81 | 10000 | 0.8015 | 0.6438 |
| 0.8372 | 0.86 | 10500 | 0.8094 | 0.6368 |
| 0.835 | 0.9 | 11000 | 0.7958 | 0.6469 |
| 0.8291 | 0.94 | 11500 | 0.7922 | 0.6479 |
| 0.8274 | 0.98 | 12000 | 0.7938 | 0.6449 |
| 0.8158 | 1.02 | 12500 | 0.7971 | 0.6450 |
| 0.8111 | 1.06 | 13000 | 0.7947 | 0.6475 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
carrtesy/cartpole-v1
carrtesy
2023-01-09T06:16:27Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T06:15:16Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 376.00 +/- 27.91
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
LarryAIDraw/kblueleaf-hypernet
LarryAIDraw
2023-01-09T05:59:56Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-09T05:49:49Z
---
license: creativeml-openrail-m
---
szamanian/sd-class-butterflies-64
szamanian
2023-01-09T05:48:39Z
29
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-01-09T05:23:50Z
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---

# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)

This model is a diffusion model for unconditional image generation of cute 🦋.

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('szamanian/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
muhtasham/tiny-mlm-glue-rte-target-glue-cola
muhtasham
2023-01-09T05:42:10Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T05:30:08Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: tiny-mlm-glue-rte-target-glue-cola
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-rte-target-glue-cola

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7986
- Matthews Correlation: 0.1168

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:---:|:---:|:---:|:---:|:---:|
| 0.6097 | 1.87 | 500 | 0.6209 | 0.0 |
| 0.6011 | 3.73 | 1000 | 0.6173 | 0.0 |
| 0.5827 | 5.6 | 1500 | 0.6197 | 0.0622 |
| 0.5534 | 7.46 | 2000 | 0.6410 | 0.0939 |
| 0.5244 | 9.33 | 2500 | 0.6664 | 0.1184 |
| 0.5087 | 11.19 | 3000 | 0.6684 | 0.1327 |
| 0.4867 | 13.06 | 3500 | 0.6789 | 0.0999 |
| 0.4693 | 14.93 | 4000 | 0.7124 | 0.1109 |
| 0.4483 | 16.79 | 4500 | 0.7333 | 0.1388 |
| 0.4303 | 18.66 | 5000 | 0.7486 | 0.1287 |
| 0.4105 | 20.52 | 5500 | 0.7961 | 0.1321 |
| 0.4046 | 22.39 | 6000 | 0.7986 | 0.1168 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
vitorhgomes/Reinforce-Pixelcopter-v3
vitorhgomes
2023-01-09T05:35:31Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T05:30:41Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 26.70 +/- 17.41
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
aplnestrella/pegasus-samsum-14
aplnestrella
2023-01-09T05:30:01Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-09T03:51:48Z
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum-14
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pegasus-samsum-14

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4292

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 14
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 1.7704 | 0.47 | 500 | 1.4958 |
| 1.65 | 0.95 | 1000 | 1.4292 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
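A hedged inference sketch for this summarization fine-tune; the dialogue is an invented SAMSum-style example and the generation lengths are arbitrary choices, not values from the card:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="aplnestrella/pegasus-samsum-14")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=48, min_length=5)[0]["summary_text"])
```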
muhtasham/tiny-mlm-glue-qqp-target-glue-stsb
muhtasham
2023-01-09T05:23:55Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T05:11:50Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: tiny-mlm-glue-qqp-target-glue-stsb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-qqp-target-glue-stsb

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qqp](https://huggingface.co/muhtasham/tiny-mlm-glue-qqp) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9234
- Pearson: 0.8132
- Spearmanr: 0.8116

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 2.9883 | 2.78 | 500 | 1.1659 | 0.7141 | 0.7498 |
| 0.9795 | 5.56 | 1000 | 1.0600 | 0.7790 | 0.8006 |
| 0.753 | 8.33 | 1500 | 0.9585 | 0.8042 | 0.8166 |
| 0.6208 | 11.11 | 2000 | 0.8495 | 0.8153 | 0.8188 |
| 0.5239 | 13.89 | 2500 | 0.8834 | 0.8149 | 0.8174 |
| 0.4691 | 16.67 | 3000 | 0.9556 | 0.8160 | 0.8195 |
| 0.4148 | 19.44 | 3500 | 0.8703 | 0.8180 | 0.8178 |
| 0.3779 | 22.22 | 4000 | 0.9027 | 0.8179 | 0.8177 |
| 0.3446 | 25.0 | 4500 | 0.9613 | 0.8191 | 0.8194 |
| 0.3215 | 27.78 | 5000 | 0.9470 | 0.8162 | 0.8160 |
| 0.3034 | 30.56 | 5500 | 0.9345 | 0.8161 | 0.8158 |
| 0.28 | 33.33 | 6000 | 0.9234 | 0.8132 | 0.8116 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
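STS-B is a regression task, so this head emits one similarity score instead of class logits; a minimal sketch of reading it directly follows (the sentence pair is invented, and the roughly 0-5 range is the STS-B labeling convention, not a guarantee about this checkpoint):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "muhtasham/tiny-mlm-glue-qqp-target-glue-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.",
                   "A person plays an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    # Single regression logit; higher means more similar.
    score = model(**inputs).logits.squeeze().item()
print(score)
```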
muhtasham/tiny-mlm-glue-qqp-target-glue-sst2
muhtasham
2023-01-09T05:11:04Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T04:54:09Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tiny-mlm-glue-qqp-target-glue-sst2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-qqp-target-glue-sst2

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qqp](https://huggingface.co/muhtasham/tiny-mlm-glue-qqp) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5039
- Accuracy: 0.8291

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.5922 | 0.24 | 500 | 0.4935 | 0.7798 |
| 0.4475 | 0.48 | 1000 | 0.4672 | 0.7936 |
| 0.3948 | 0.71 | 1500 | 0.4418 | 0.7947 |
| 0.3742 | 0.95 | 2000 | 0.4701 | 0.7878 |
| 0.3364 | 1.19 | 2500 | 0.4464 | 0.8050 |
| 0.318 | 1.43 | 3000 | 0.4442 | 0.8108 |
| 0.2982 | 1.66 | 3500 | 0.4462 | 0.8062 |
| 0.2942 | 1.9 | 4000 | 0.4449 | 0.8211 |
| 0.2759 | 2.14 | 4500 | 0.4794 | 0.8062 |
| 0.2554 | 2.38 | 5000 | 0.4390 | 0.8200 |
| 0.2476 | 2.61 | 5500 | 0.4339 | 0.8303 |
| 0.2572 | 2.85 | 6000 | 0.4432 | 0.8268 |
| 0.2383 | 3.09 | 6500 | 0.4562 | 0.8291 |
| 0.2339 | 3.33 | 7000 | 0.4548 | 0.8349 |
| 0.2178 | 3.56 | 7500 | 0.4400 | 0.8349 |
| 0.2156 | 3.8 | 8000 | 0.4745 | 0.8337 |
| 0.2135 | 4.04 | 8500 | 0.5039 | 0.8291 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
vitorhgomes/Reinforce-Pixelcopter-v2
vitorhgomes
2023-01-09T05:08:12Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T05:07:27Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-v2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 13.86 +/- 15.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
akgeni/pixelcopter-v2
akgeni
2023-01-09T04:52:51Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T04:52:43Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter-v2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 11.80 +/- 9.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
vitorhgomes/Reinforce-Pixelcopter-v1
vitorhgomes
2023-01-09T04:40:06Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T04:39:00Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 20.40 +/- 15.54
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
TheTeamBuilder/q-FrozenLake-v1-4x4-noSlippery
TheTeamBuilder
2023-01-09T04:38:30Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T04:38:24Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="TheTeamBuilder/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
leeju/08-3-4-distilbert-base-uncased-finetuned-clinc
leeju
2023-01-09T04:12:02Z
27
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T02:14:27Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: 08-3-4-distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      config: plus
      split: train
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9151612903225806
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 08-3-4-distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7777
- Accuracy: 0.9152

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 318 | 3.3018 | 0.7439 |
| 3.7971 | 2.0 | 636 | 1.8880 | 0.8406 |
| 3.7971 | 3.0 | 954 | 1.1649 | 0.8932 |
| 1.7002 | 4.0 | 1272 | 0.8611 | 0.9119 |
| 0.9041 | 5.0 | 1590 | 0.7777 | 0.9152 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
EduardoCGarridoMerchan/pixelCopter
EduardoCGarridoMerchan
2023-01-09T04:05:07Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T16:09:10Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelCopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 105.30 +/- 132.52
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
egumasa/roberta-base-academic
egumasa
2023-01-09T04:00:27Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "dataset:orieg/elsevier-oa-cc-by", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-05T08:19:33Z
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-academic
  results: []
datasets:
- orieg/elsevier-oa-cc-by
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-academic

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on a combination of the Elsevier OA CC-BY dataset and corpora of university essays such as [BAWE](https://www.coventry.ac.uk/research/research-directories/current-projects/2015/british-academic-written-english-corpus-bawe/) and [MICUSP](https://elicorpora.info/main).
It achieves the following results on the evaluation set:
- Loss: 1.4229

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 1.671 | 1.0 | 338 | 1.5581 |
| 1.6395 | 1.99 | 676 | 1.5276 |
| 1.5991 | 2.99 | 1014 | 1.5108 |
| 1.5659 | 3.99 | 1352 | 1.4903 |
| 1.5393 | 4.99 | 1690 | 1.4668 |
| 1.5178 | 5.98 | 2028 | 1.4621 |
| 1.4962 | 6.98 | 2366 | 1.4388 |
| 1.4783 | 7.98 | 2704 | 1.4320 |
| 1.4652 | 8.97 | 3042 | 1.4216 |
| 1.4542 | 9.97 | 3380 | 1.4180 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
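A minimal fill-mask sketch for this domain-adapted checkpoint; the academic-register sentence is invented, and `<mask>` is RoBERTa's mask token:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="egumasa/roberta-base-academic")
for pred in fill("The results <mask> that the proposed method outperforms the baseline."):
    print(pred["token_str"], round(pred["score"], 3))
```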
Bill010602/dqn-SpaceInvadersNoFrameskip-v4_V4
Bill010602
2023-01-09T03:58:14Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T03:57:34Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 686.50 +/- 131.00
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bill010602 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bill010602 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bill010602
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.05),
             ('exploration_fraction', 0.4),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
bellengc/wav2vec2-large-xls-r-300m-asp-project-bribri
bellengc
2023-01-09T03:35:23Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-01-09T01:59:53Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-asp-project-bribri results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-asp-project-bribri This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
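The card leaves usage unspecified; a minimal transcription sketch with the `transformers` pipeline (the audio path is a stand-in, and it assumes the repo ships its processor) could be:

```python
from transformers import pipeline

# Load the fine-tuned XLS-R checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="bellengc/wav2vec2-large-xls-r-300m-asp-project-bribri",
)

# "sample.wav" is a placeholder for any local 16 kHz mono recording.
print(asr("sample.wav")["text"])
```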
alphahg/koelectra-base-86371428
alphahg
2023-01-09T03:21:56Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T02:41:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: koelectra-base-86371428 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-base-86371428 This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.6169 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 128 - eval_batch_size: 128 - seed: 30 - gradient_accumulation_steps: 8 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.94 | 10 | 1.8078 | | No log | 1.94 | 20 | 1.6169 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
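As with most auto-generated cards here, usage is left blank; a minimal extractive-QA sketch (the Korean question/context pair is illustrative only) could be:

```python
from transformers import pipeline

# Load the KoELECTRA reader fine-tuned on custom_squad_v2.
qa = pipeline("question-answering", model="alphahg/koelectra-base-86371428")

result = qa(question="대한민국의 수도는 어디인가?", context="대한민국의 수도는 서울이다.")
print(result["answer"], result["score"])
```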
muhtasham/tiny-mlm-glue-qnli-target-glue-stsb
muhtasham
2023-01-09T02:42:47Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T02:35:29Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: tiny-mlm-glue-qnli-target-glue-stsb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-qnli-target-glue-stsb This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8934 - Pearson: 0.8154 - Spearmanr: 0.8157 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 2.952 | 2.78 | 500 | 1.1581 | 0.7199 | 0.7571 | | 0.9583 | 5.56 | 1000 | 1.1118 | 0.7743 | 0.7995 | | 0.7459 | 8.33 | 1500 | 0.9843 | 0.8028 | 0.8182 | | 0.6197 | 11.11 | 2000 | 0.8616 | 0.8165 | 0.8217 | | 0.5182 | 13.89 | 2500 | 0.9113 | 0.8140 | 0.8169 | | 0.4676 | 16.67 | 3000 | 0.9804 | 0.8144 | 0.8183 | | 0.4128 | 19.44 | 3500 | 0.8934 | 0.8154 | 0.8157 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-qnli-target-glue-sst2
muhtasham
2023-01-09T02:34:44Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T02:17:58Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-qnli-target-glue-sst2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-qnli-target-glue-sst2 This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5008 - Accuracy: 0.8211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5757 | 0.24 | 500 | 0.4901 | 0.7775 | | 0.4436 | 0.48 | 1000 | 0.4673 | 0.7833 | | 0.3947 | 0.71 | 1500 | 0.4434 | 0.7970 | | 0.3751 | 0.95 | 2000 | 0.4601 | 0.7970 | | 0.3326 | 1.19 | 2500 | 0.4463 | 0.8005 | | 0.316 | 1.43 | 3000 | 0.4510 | 0.8005 | | 0.2981 | 1.66 | 3500 | 0.4367 | 0.8142 | | 0.2929 | 1.9 | 4000 | 0.4383 | 0.8108 | | 0.2746 | 2.14 | 4500 | 0.4873 | 0.8016 | | 0.256 | 2.38 | 5000 | 0.4395 | 0.8165 | | 0.246 | 2.61 | 5500 | 0.4444 | 0.8280 | | 0.2522 | 2.85 | 6000 | 0.4478 | 0.8245 | | 0.2371 | 3.09 | 6500 | 0.4556 | 0.8291 | | 0.2299 | 3.33 | 7000 | 0.4655 | 0.8326 | | 0.2143 | 3.56 | 7500 | 0.4581 | 0.8314 | | 0.2153 | 3.8 | 8000 | 0.4869 | 0.8291 | | 0.2134 | 4.04 | 8500 | 0.5008 | 0.8211 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
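Since the card gives no inference example, a minimal sentiment sketch (SST-2 is binary sentiment; the generic LABEL_0/LABEL_1 names below are an assumption, as the card does not say whether id2label was customized) might be:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="muhtasham/tiny-mlm-glue-qnli-target-glue-sst2",
)

# Expect something like [{'label': 'LABEL_1', 'score': 0.9...}] unless id2label was set.
print(classifier("a gripping, beautifully shot film"))
```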
AleNunezArroyo/bert-from-scratch-15e-10334t-finetuned-opinion
AleNunezArroyo
2023-01-09T02:31:42Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-09T00:21:36Z
--- tags: - generated_from_trainer model-index: - name: bert-from-scratch-15e-10334t-finetuned-opinion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-from-scratch-15e-10334t-finetuned-opinion This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.5936 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.5669 | 1.0 | 902 | 6.2062 | | 6.1906 | 2.0 | 1804 | 6.0842 | | 6.0858 | 3.0 | 2706 | 6.0119 | | 6.0325 | 4.0 | 3608 | 5.9765 | | 5.9894 | 5.0 | 4510 | 5.9406 | | 5.958 | 6.0 | 5412 | 5.9109 | | 5.9195 | 7.0 | 6314 | 5.8513 | | 5.8653 | 8.0 | 7216 | 5.8068 | | 5.8215 | 9.0 | 8118 | 5.7579 | | 5.772 | 10.0 | 9020 | 5.7021 | | 5.7374 | 11.0 | 9922 | 5.6582 | | 5.7041 | 12.0 | 10824 | 5.6425 | | 5.6762 | 13.0 | 11726 | 5.6251 | | 5.6606 | 14.0 | 12628 | 5.6135 | | 5.655 | 15.0 | 13530 | 5.6090 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
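A usage sketch for this fill-mask checkpoint (it assumes the tokenizer was pushed to the same repo; BERT-style models use "[MASK]" rather than "<mask>"):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="AleNunezArroyo/bert-from-scratch-15e-10334t-finetuned-opinion",
)

# The Spanish example sentence is illustrative; the card does not state the training language.
print(fill_mask("La opinión pública es un [MASK] importante."))
```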
DiegoD616/Reinforce-CartPole-v1
DiegoD616
2023-01-09T02:25:08Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T02:08:31Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 486.85 +/- 53.55 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
muhtasham/tiny-mlm-glue-qnli-target-glue-rte
muhtasham
2023-01-09T02:16:51Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T02:12:18Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-qnli-target-glue-rte results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-qnli-target-glue-rte This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2152 - Accuracy: 0.6029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6386 | 6.41 | 500 | 0.6664 | 0.6245 | | 0.4313 | 12.82 | 1000 | 0.8105 | 0.6245 | | 0.2642 | 19.23 | 1500 | 1.0035 | 0.6101 | | 0.1617 | 25.64 | 2000 | 1.2152 | 0.6029 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-qnli-target-glue-qqp
muhtasham
2023-01-09T02:10:29Z
104
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T01:16:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: tiny-mlm-glue-qnli-target-glue-qqp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-qnli-target-glue-qqp This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4125 - Accuracy: 0.7971 - F1: 0.7707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.5776 | 0.04 | 500 | 0.5177 | 0.7272 | 0.6831 | | 0.5088 | 0.09 | 1000 | 0.4828 | 0.7515 | 0.7055 | | 0.4952 | 0.13 | 1500 | 0.4939 | 0.7383 | 0.7143 | | 0.4797 | 0.18 | 2000 | 0.4681 | 0.7547 | 0.7225 | | 0.4723 | 0.22 | 2500 | 0.4564 | 0.7621 | 0.7274 | | 0.4551 | 0.26 | 3000 | 0.4475 | 0.7693 | 0.7351 | | 0.4573 | 0.31 | 3500 | 0.4479 | 0.7676 | 0.7372 | | 0.4496 | 0.35 | 4000 | 0.4483 | 0.7668 | 0.7390 | | 0.4503 | 0.4 | 4500 | 0.4413 | 0.7720 | 0.7436 | | 0.4407 | 0.44 | 5000 | 0.4192 | 0.7899 | 0.7498 | | 0.4288 | 0.48 | 5500 | 0.4261 | 0.7845 | 0.7512 | | 0.4292 | 0.53 | 6000 | 0.4058 | 0.8022 | 0.7581 | | 0.4235 | 0.57 | 6500 | 0.4201 | 0.7893 | 0.7560 | | 0.4251 | 0.62 | 7000 | 0.4050 | 0.8007 | 0.7593 | | 0.4161 | 0.66 | 7500 | 0.4063 | 0.8040 | 0.7652 | | 0.4297 | 0.7 | 8000 | 0.4116 | 0.7959 | 0.7617 | | 0.4201 | 0.75 | 8500 | 0.3975 | 0.8069 | 0.7677 | | 0.4142 | 0.79 | 9000 | 0.4186 | 0.7889 | 0.7609 | | 0.4113 | 0.84 | 9500 | 0.3900 | 0.8112 | 0.7687 | | 0.413 | 0.88 | 10000 | 0.3852 | 0.8161 | 0.7732 | | 0.4084 | 0.92 | 10500 | 0.3826 | 0.8161 | 0.7714 | | 0.4083 | 0.97 | 11000 | 0.3826 | 0.8187 | 0.7733 | | 0.4057 | 1.01 | 11500 | 0.4016 | 0.8029 | 0.7711 | | 0.3846 | 1.06 | 12000 | 0.3803 | 0.8187 | 0.7759 | | 0.3949 | 1.1 | 12500 | 0.3827 | 0.8154 | 0.7773 | | 0.3823 | 1.14 | 13000 | 0.3878 | 0.8136 | 0.7763 | | 0.3717 | 1.19 | 13500 | 0.4125 | 0.7971 | 0.7707 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
jkap/ppo-Huggy
jkap
2023-01-09T02:00:55Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-01-09T02:00:48Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: jkap/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
YuJungSoo/koelectra-50769988
YuJungSoo
2023-01-09T01:55:34Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-01-09T01:09:27Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: koelectra-50769988 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-50769988 This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.2600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 30 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 21 | 1.4204 | | No log | 1.99 | 42 | 1.2600 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Drasimov/Charo
Drasimov
2023-01-09T01:37:17Z
0
0
nemo
[ "nemo", "es", "en", "dataset:ksang/Summoner-Statistics", "dataset:quinsclr/answerable_tydiqa_statistical", "dataset:wikipedia", "dataset:gamino/wiki_medical_terms", "dataset:medical_dialog", "dataset:bigbio/medical_data", "license:openrail", "region:us" ]
null
2023-01-09T01:30:40Z
--- license: openrail datasets: - ksang/Summoner-Statistics - quinsclr/answerable_tydiqa_statistical - wikipedia - gamino/wiki_medical_terms - medical_dialog - bigbio/medical_data language: - es - en library_name: nemo ---
bellengc/output
bellengc
2023-01-09T01:19:41Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-01-05T00:01:39Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.241648134793786e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
fort-ests/forest
fort-ests
2023-01-09T01:18:31Z
0
0
null
[ "en", "te", "hi", "ta", "ml", "as", "bn", "gu", "mr", "license:bsd", "region:us" ]
null
2023-01-09T01:17:09Z
--- license: bsd language: - en - te - hi - ta - ml - as - bn - gu - mr ---
jpopham91/ppo-Huggy
jpopham91
2023-01-09T00:58:00Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-01-09T00:57:53Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: jpopham91/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
muhtasham/tiny-mlm-glue-qnli-target-glue-mnli
muhtasham
2023-01-09T00:52:24Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T00:22:58Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-qnli-target-glue-mnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-qnli-target-glue-mnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7907 - Accuracy: 0.6507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0753 | 0.04 | 500 | 1.0327 | 0.4677 | | 1.0084 | 0.08 | 1000 | 0.9655 | 0.5434 | | 0.962 | 0.12 | 1500 | 0.9232 | 0.5779 | | 0.9358 | 0.16 | 2000 | 0.9087 | 0.5874 | | 0.9241 | 0.2 | 2500 | 0.8928 | 0.5963 | | 0.9157 | 0.24 | 3000 | 0.8772 | 0.5988 | | 0.8992 | 0.29 | 3500 | 0.8687 | 0.6088 | | 0.8928 | 0.33 | 4000 | 0.8571 | 0.6173 | | 0.8757 | 0.37 | 4500 | 0.8529 | 0.6164 | | 0.8774 | 0.41 | 5000 | 0.8438 | 0.6232 | | 0.8694 | 0.45 | 5500 | 0.8372 | 0.6246 | | 0.8653 | 0.49 | 6000 | 0.8350 | 0.6265 | | 0.8677 | 0.53 | 6500 | 0.8268 | 0.6292 | | 0.8584 | 0.57 | 7000 | 0.8270 | 0.6326 | | 0.8508 | 0.61 | 7500 | 0.8134 | 0.6391 | | 0.8521 | 0.65 | 8000 | 0.8110 | 0.6416 | | 0.8447 | 0.69 | 8500 | 0.8264 | 0.6323 | | 0.8466 | 0.73 | 9000 | 0.7951 | 0.6468 | | 0.8379 | 0.77 | 9500 | 0.8089 | 0.6401 | | 0.8277 | 0.81 | 10000 | 0.7941 | 0.6477 | | 0.8307 | 0.86 | 10500 | 0.7999 | 0.6437 | | 0.8289 | 0.9 | 11000 | 0.7874 | 0.6530 | | 0.8228 | 0.94 | 11500 | 0.7835 | 0.6524 | | 0.8228 | 0.98 | 12000 | 0.7851 | 0.6511 | | 0.8078 | 1.02 | 12500 | 0.7907 | 0.6507 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Jbot/ppo-LunarLander-v2
Jbot
2023-01-09T00:49:36Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T22:35:41Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 273.86 +/- 17.81 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; check the files in this repo):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by the card.
checkpoint = load_from_hub("Jbot/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
huggingtweets/benshapiro-joerogan-jordanbpeterson
huggingtweets
2023-01-09T00:48:35Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-09T00:47:20Z
--- language: en thumbnail: http://www.huggingtweets.com/benshapiro-joerogan-jordanbpeterson/1673225310208/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1580596905721171969/0NnLeJWA_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1407056014776614923/TKBC60e1_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/552307347851210752/vrXDcTFC_400x400.jpeg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ben Shapiro & Dr Jordan B Peterson & Joe Rogan</div> <div style="text-align: center; font-size: 14px;">@benshapiro-joerogan-jordanbpeterson</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ben Shapiro & Dr Jordan B Peterson & Joe Rogan. | Data | Ben Shapiro | Dr Jordan B Peterson | Joe Rogan | | --- | --- | --- | --- | | Tweets downloaded | 3244 | 3244 | 3192 | | Retweets | 2399 | 960 | 1129 | | Short tweets | 66 | 198 | 44 | | Tweets kept | 779 | 2086 | 2019 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/319qduw1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @benshapiro-joerogan-jordanbpeterson's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kq320mm4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kq320mm4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/benshapiro-joerogan-jordanbpeterson') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model.
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BhavyaMuni/taylor-swift-model-temp
BhavyaMuni
2023-01-09T00:36:25Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-09T00:07:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: taylor-swift-model-temp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # taylor-swift-model-temp This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.0072 | 1.0 | 58 | 3.7794 | | 3.8685 | 2.0 | 116 | 3.6857 | | 3.8123 | 3.0 | 174 | 3.6220 | | 3.7141 | 4.0 | 232 | 3.5796 | | 3.3674 | 5.0 | 290 | 3.5402 | | 3.556 | 6.0 | 348 | 3.5092 | | 3.442 | 7.0 | 406 | 3.4829 | | 3.5147 | 8.0 | 464 | 3.4609 | | 3.3591 | 9.0 | 522 | 3.4289 | | 3.3258 | 10.0 | 580 | 3.4135 | | 3.2393 | 11.0 | 638 | 3.3918 | | 3.2989 | 12.0 | 696 | 3.3756 | | 3.2535 | 13.0 | 754 | 3.3557 | | 3.1144 | 14.0 | 812 | 3.3352 | | 2.9332 | 15.0 | 870 | 3.3305 | | 3.0371 | 16.0 | 928 | 3.3078 | | 3.0357 | 17.0 | 986 | 3.2889 | | 2.8728 | 18.0 | 1044 | 3.2851 | | 2.9121 | 19.0 | 1102 | 3.2688 | | 2.9804 | 20.0 | 1160 | 3.2562 | | 2.855 | 21.0 | 1218 | 3.2485 | | 2.7546 | 22.0 | 1276 | 3.2275 | | 2.9248 | 23.0 | 1334 | 3.2233 | | 2.9627 | 24.0 | 1392 | 3.2113 | | 2.891 | 25.0 | 1450 | 3.1965 | | 2.7106 | 26.0 | 1508 | 3.1925 | | 2.8863 | 27.0 | 1566 | 3.1836 | | 2.8311 | 28.0 | 1624 | 3.1869 | | 2.6953 | 29.0 | 1682 | 3.1769 | | 2.7916 | 30.0 | 1740 | 3.1717 | | 2.7262 | 31.0 | 1798 | 3.1609 | | 2.6203 | 32.0 | 1856 | 3.1564 | | 2.7066 | 33.0 | 1914 | 3.1492 | | 2.3818 | 34.0 | 1972 | 3.1475 | | 2.7237 | 35.0 | 2030 | 3.1412 | | 2.4593 | 36.0 | 2088 | 3.1372 | | 2.5471 | 37.0 | 2146 | 3.1298 | | 2.6026 | 38.0 | 2204 | 3.1324 | | 2.5049 | 39.0 | 2262 | 3.1285 | | 2.5509 | 40.0 | 2320 | 3.1262 | | 2.7736 | 41.0 | 2378 | 3.1142 | | 2.7144 | 42.0 | 2436 | 3.1159 | | 2.5972 | 43.0 | 2494 | 3.1145 | | 2.5897 | 44.0 | 2552 | 3.1142 | | 2.4131 | 45.0 | 2610 | 3.1152 | | 2.5602 | 46.0 | 2668 | 3.1130 | | 2.4986 | 47.0 | 2726 | 3.1123 | | 2.5507 | 48.0 | 2784 | 3.1108 | | 2.4885 | 49.0 | 2842 | 3.1124 | | 2.4204 | 50.0 | 2900 | 3.1118 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
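The card lists training curves but no inference example; a minimal generation sketch (the prompt and sampling settings are illustrative) could be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="BhavyaMuni/taylor-swift-model-temp")

# Sample two short continuations; adjust max_new_tokens or temperature to taste.
for out in generator("I remember when", max_new_tokens=40, do_sample=True, num_return_sequences=2):
    print(out["generated_text"])
```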
muhtasham/tiny-mlm-glue-qnli-target-glue-cola
muhtasham
2023-01-09T00:19:16Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-09T00:10:08Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: tiny-mlm-glue-qnli-target-glue-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-qnli-target-glue-cola This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7322 - Matthews Correlation: 0.1353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6099 | 1.87 | 500 | 0.6209 | 0.0 | | 0.6009 | 3.73 | 1000 | 0.6169 | 0.0 | | 0.5819 | 5.6 | 1500 | 0.6196 | 0.0545 | | 0.5519 | 7.46 | 2000 | 0.6391 | 0.0997 | | 0.5226 | 9.33 | 2500 | 0.6657 | 0.1182 | | 0.5061 | 11.19 | 3000 | 0.6671 | 0.1357 | | 0.4831 | 13.06 | 3500 | 0.6787 | 0.1205 | | 0.4652 | 14.93 | 4000 | 0.7167 | 0.1264 | | 0.4443 | 16.79 | 4500 | 0.7322 | 0.1353 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
gaalocastillo/wav2vec2-large-xls-r-300m-asp-project-bribri
gaalocastillo
2023-01-09T00:12:44Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-01-08T23:40:16Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-asp-project-bribri results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-asp-project-bribri This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.24e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-mrpc-target-glue-sst2
muhtasham
2023-01-08T23:50:59Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-08T23:34:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-mrpc-target-glue-sst2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-target-glue-sst2 This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4921 - Accuracy: 0.8314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5814 | 0.24 | 500 | 0.4938 | 0.7706 | | 0.4444 | 0.48 | 1000 | 0.4690 | 0.7844 | | 0.3934 | 0.71 | 1500 | 0.4458 | 0.7982 | | 0.3733 | 0.95 | 2000 | 0.4633 | 0.7890 | | 0.3319 | 1.19 | 2500 | 0.4503 | 0.7982 | | 0.3151 | 1.43 | 3000 | 0.4525 | 0.8028 | | 0.2971 | 1.66 | 3500 | 0.4431 | 0.8142 | | 0.2899 | 1.9 | 4000 | 0.4452 | 0.8108 | | 0.2716 | 2.14 | 4500 | 0.4914 | 0.7993 | | 0.2548 | 2.38 | 5000 | 0.4419 | 0.8177 | | 0.2443 | 2.61 | 5500 | 0.4475 | 0.8245 | | 0.2515 | 2.85 | 6000 | 0.4462 | 0.8257 | | 0.2357 | 3.09 | 6500 | 0.4509 | 0.8314 | | 0.2279 | 3.33 | 7000 | 0.4641 | 0.8337 | | 0.2134 | 3.56 | 7500 | 0.4615 | 0.8326 | | 0.2136 | 3.8 | 8000 | 0.4882 | 0.8314 | | 0.2122 | 4.04 | 8500 | 0.4921 | 0.8314 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Orokusaki/q-FrozenLake-v1-8x8-Slippery
Orokusaki
2023-01-08T23:44:04Z
0
0
null
[ "FrozenLake-v1-8x8", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T23:43:59Z
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8-Slippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.12 +/- 0.32 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Orokusaki/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
muhtasham/tiny-mlm-glue-mrpc-target-glue-rte
muhtasham
2023-01-08T23:33:01Z
106
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-08T23:28:29Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-mrpc-target-glue-rte results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-target-glue-rte This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2201 - Accuracy: 0.6101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6409 | 6.41 | 500 | 0.6648 | 0.6209 | | 0.4327 | 12.82 | 1000 | 0.8199 | 0.6173 | | 0.2663 | 19.23 | 1500 | 1.0143 | 0.5921 | | 0.1606 | 25.64 | 2000 | 1.2201 | 0.6101 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
armargolis/Reinforce-Pixelcopter-PLE-v0
armargolis
2023-01-08T22:56:26Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T22:56:16Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 46.00 +/- 38.96 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
kelestemur/q-Taxi-v3
kelestemur
2023-01-08T22:43:21Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T22:43:17Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="kelestemur/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
kelestemur/q-FrozenLake-v1-4x4-noSlippery
kelestemur
2023-01-08T22:34:19Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T22:34:15Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="kelestemur/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
muhtasham/tiny-mlm-glue-mrpc-target-glue-qnli
muhtasham
2023-01-08T22:30:30Z
104
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-08T22:19:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-mrpc-target-glue-qnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4717 - Accuracy: 0.7798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6112 | 0.15 | 500 | 0.5408 | 0.7346 | | 0.5426 | 0.31 | 1000 | 0.5351 | 0.7366 | | 0.522 | 0.46 | 1500 | 0.5029 | 0.7619 | | 0.5151 | 0.61 | 2000 | 0.5191 | 0.7529 | | 0.5116 | 0.76 | 2500 | 0.4829 | 0.7758 | | 0.5052 | 0.92 | 3000 | 0.4673 | 0.7833 | | 0.4909 | 1.07 | 3500 | 0.4521 | 0.7921 | | 0.4811 | 1.22 | 4000 | 0.4689 | 0.7827 | | 0.4672 | 1.37 | 4500 | 0.4819 | 0.7730 | | 0.4744 | 1.53 | 5000 | 0.4717 | 0.7798 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-mrpc-target-glue-mrpc
muhtasham
2023-01-08T22:18:03Z
101
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-08T22:12:42Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: tiny-mlm-glue-mrpc-target-glue-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-target-glue-mrpc This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0963 - Accuracy: 0.7034 - F1: 0.7738 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5884 | 4.35 | 500 | 0.5523 | 0.7059 | 0.8046 | | 0.4494 | 8.7 | 1000 | 0.5547 | 0.7574 | 0.8358 | | 0.304 | 13.04 | 1500 | 0.6339 | 0.7525 | 0.8256 | | 0.1927 | 17.39 | 2000 | 0.7843 | 0.7230 | 0.8000 | | 0.1179 | 21.74 | 2500 | 1.0963 | 0.7034 | 0.7738 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-mrpc-target-glue-mnli
muhtasham
2023-01-08T22:11:38Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-08T21:46:57Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-glue-mrpc-target-glue-mnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-target-glue-mnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8094 - Accuracy: 0.6373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0737 | 0.04 | 500 | 1.0366 | 0.4615 | | 1.0169 | 0.08 | 1000 | 0.9833 | 0.5194 | | 0.9799 | 0.12 | 1500 | 0.9344 | 0.5719 | | 0.9452 | 0.16 | 2000 | 0.9106 | 0.5879 | | 0.9293 | 0.2 | 2500 | 0.8905 | 0.5962 | | 0.9189 | 0.24 | 3000 | 0.8801 | 0.6026 | | 0.9017 | 0.29 | 3500 | 0.8705 | 0.6103 | | 0.896 | 0.33 | 4000 | 0.8619 | 0.6178 | | 0.881 | 0.37 | 4500 | 0.8574 | 0.6211 | | 0.8854 | 0.41 | 5000 | 0.8495 | 0.6201 | | 0.8756 | 0.45 | 5500 | 0.8434 | 0.6223 | | 0.8713 | 0.49 | 6000 | 0.8410 | 0.6263 | | 0.8757 | 0.53 | 6500 | 0.8337 | 0.6301 | | 0.8624 | 0.57 | 7000 | 0.8363 | 0.6284 | | 0.8576 | 0.61 | 7500 | 0.8203 | 0.6356 | | 0.8583 | 0.65 | 8000 | 0.8188 | 0.6378 | | 0.8523 | 0.69 | 8500 | 0.8294 | 0.6304 | | 0.8533 | 0.73 | 9000 | 0.8052 | 0.6429 | | 0.8448 | 0.77 | 9500 | 0.8180 | 0.6356 | | 0.8368 | 0.81 | 10000 | 0.8030 | 0.6399 | | 0.8389 | 0.86 | 10500 | 0.8094 | 0.6373 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
nolanaatama/stable-diffusion-webui
nolanaatama
2023-01-08T22:09:12Z
0
10
null
[ "region:us" ]
null
2023-01-08T22:05:44Z
# Stable Diffusion web UI A browser interface based on Gradio library for Stable Diffusion. ![](txt2img_Screenshot.png) Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) wiki page for extra scripts developed by users. ## Features [Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features): - Original txt2img and img2img modes - One click install and run script (but you still must install python and git) - Outpainting - Inpainting - Color Sketch - Prompt Matrix - Stable Diffusion Upscale - Attention, specify parts of text that the model should pay more attention to - a man in a ((tuxedo)) - will pay more attention to tuxedo - a man in a (tuxedo:1.21) - alternative syntax - select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user) - Loopback, run img2img processing multiple times - X/Y plot, a way to draw a 2 dimensional plot of images with different parameters - Textual Inversion - have as many embeddings as you want and use any names you like for them - use multiple embeddings with different numbers of vectors per token - works with half precision floating point numbers - train embeddings on 8GB (also reports of 6GB working) - Extras tab with: - GFPGAN, neural network that fixes faces - CodeFormer, face restoration tool as an alternative to GFPGAN - RealESRGAN, neural network upscaler - ESRGAN, neural network upscaler with a lot of third party models - SwinIR and Swin2SR([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers - LDSR, Latent diffusion super resolution upscaling - Resizing aspect ratio options - Sampling method selection - Adjust sampler eta values (noise multiplier) - More advanced noise setting options - Interrupt processing at any time - 4GB video card support (also reports of 2GB working) - Correct seeds for batches - Live prompt token length validation - Generation parameters - parameters you used to generate images are saved with that image - in PNG chunks for PNG, in EXIF for JPEG - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI - can be disabled in settings - drag and drop an image/text-parameters to promptbox - Read Generation Parameters Button, loads parameters in promptbox to UI - Settings page - Running arbitrary python code from UI (must run with --allow-code to enable) - Mouseover hints for most UI elements - Possible to change defaults/min/max/step values for UI elements via text config - Random artist button - Tiling support, a checkbox to create images that can be tiled like textures - Progress bar and live image generation preview - Negative prompt, an extra text field that allows you to list what you don't want to see in generated image - Styles, a way to save part of prompt and easily apply them via dropdown later - Variations, a way to generate same image but with tiny differences - Seed resizing, a way to generate same image but at slightly different resolution - CLIP interrogator, a button that tries to guess prompt from an image - Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway - Batch Processing, process a group of files using img2img - Img2img Alternative, reverse Euler method of cross attention control - Highres Fix, a convenience option to produce high resolution pictures in one click without usual
distortions - Reloading checkpoints on the fly - Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one - [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community - [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once - separate prompts using uppercase `AND` - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2` - No token limit for prompts (original stable diffusion lets you use up to 75 tokens) - DeepDanbooru integration, creates danbooru style tags for anime prompts - [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add --xformers to commandline args) - via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI - Generate forever option - Training tab - hypernetworks and embeddings options - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime) - Clip skip - Use Hypernetworks - Use VAEs - Estimated completion time in progress bar - API - Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML. - via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)) - [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions ## Installation and Running Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs. Alternatively, use online services (like Google Colab): - [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services) ### Automatic Installation on Windows 1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH" 2. Install [git](https://git-scm.com/download/win). 3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`. 4. Place `model.ckpt` in the `models` directory (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it). 5. _*(Optional)*_ Place `GFPGANv1.4.pth` in the base directory, alongside `webui.py` (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it). 6. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user. ### Automatic Installation on Linux 1. 
Install the dependencies: ```bash # Debian-based: sudo apt install wget git python3 python3-venv # Red Hat-based: sudo dnf install wget git python3 # Arch-based: sudo pacman -S wget git python3 ``` 2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run: ```bash bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh) ``` ### Installation on Apple Silicon Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon). ## Contributing Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) ## Documentation The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki). ## Credits - Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers - k-diffusion - https://github.com/crowsonkb/k-diffusion.git - GFPGAN - https://github.com/TencentARC/GFPGAN.git - CodeFormer - https://github.com/sczhou/CodeFormer - ESRGAN - https://github.com/xinntao/ESRGAN - SwinIR - https://github.com/JingyunLiang/SwinIR - Swin2SR - https://github.com/mv-lab/swin2sr - LDSR - https://github.com/Hafiidz/latent-diffusion - MiDaS - https://github.com/isl-org/MiDaS - Ideas for optimizations - https://github.com/basujindal/stable-diffusion - Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing. - Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion) - Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas). - Idea for SD upscale - https://github.com/jquesnelle/txt2imghd - Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot - CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator - Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch - xformers - https://github.com/facebookresearch/xformers - DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru - Security advice - RyotaK - Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user. - (You)
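The feature list mentions an API but the README never shows it in use. Below is a minimal sketch: it assumes the server was launched with the `--api` commandline flag and exposes the common `/sdapi/v1/txt2img` route; the route and payload keys may differ between versions and are not documented here.

```python
import base64
import json
from urllib.request import Request, urlopen

# Assumed endpoint and payload keys; verify against your webui version.
payload = {"prompt": "a man in a (tuxedo:1.21)", "steps": 20, "width": 512, "height": 512}
req = Request(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    images = json.loads(resp.read())["images"]  # list of base64-encoded PNGs
with open("out.png", "wb") as f:
    f.write(base64.b64decode(images[0]))
```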
ManuD/videomae-base-finetuned-dfl_clips
ManuD
2023-01-08T22:04:14Z
63
0
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-01-08T17:59:48Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer model-index: - name: videomae-base-finetuned-dfl_clips results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-dfl_clips This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 532 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
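The card ships without a usage example; the sketch below shows one plausible way to run inference with a recent `transformers` release. It assumes the repository includes a processor configuration alongside the weights, and the random frames merely stand in for a real 16-frame clip.

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, AutoModelForVideoClassification

# Hypothetical smoke test: 16 random 224x224 RGB frames stand in for a real clip.
model_id = "ManuD/videomae-base-finetuned-dfl_clips"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForVideoClassification.from_pretrained(model_id)

video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```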
kelestemur/deep_rl
kelestemur
2023-01-08T21:58:21Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T21:57:58Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.57 +/- 20.35 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch in place of the original TODO; the checkpoint filename below is an assumption, so check the repository's file list for the actual name. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO # "ppo-LunarLander-v2.zip" is a guessed filename; adjust it to the file in the repo. checkpoint = load_from_hub(repo_id="kelestemur/deep_rl", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
Closen/ppo-LunarLander-v2
Closen
2023-01-08T21:54:31Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-08T21:54:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 244.97 +/- 28.21 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch in place of the original TODO; the checkpoint filename below is an assumption, so check the repository's file list for the actual name. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO # "ppo-LunarLander-v2.zip" is a guessed filename; adjust it to the file in the repo. checkpoint = load_from_hub(repo_id="Closen/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
muhtasham/tiny-mlm-glue-mrpc-target-glue-cola
muhtasham
2023-01-08T21:43:14Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-08T21:31:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: tiny-mlm-glue-mrpc-target-glue-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-target-glue-cola This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7869 - Matthews Correlation: 0.1551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6097 | 1.87 | 500 | 0.6213 | 0.0 | | 0.6008 | 3.73 | 1000 | 0.6170 | 0.0 | | 0.5827 | 5.6 | 1500 | 0.6185 | 0.0615 | | 0.5534 | 7.46 | 2000 | 0.6389 | 0.1043 | | 0.5246 | 9.33 | 2500 | 0.6589 | 0.1507 | | 0.5102 | 11.19 | 3000 | 0.6608 | 0.1476 | | 0.4873 | 13.06 | 3500 | 0.6693 | 0.1282 | | 0.4681 | 14.93 | 4000 | 0.7066 | 0.1577 | | 0.448 | 16.79 | 4500 | 0.7266 | 0.1613 | | 0.4302 | 18.66 | 5000 | 0.7454 | 0.1446 | | 0.4108 | 20.52 | 5500 | 0.7858 | 0.1595 | | 0.4023 | 22.39 | 6000 | 0.7869 | 0.1551 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
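The card stops at the framework versions without a usage snippet; for a quick smoke test of the checkpoint, here is a minimal sketch under two assumptions: the tokenizer was pushed together with the weights, and no `id2label` mapping was set during training, so the CoLA classes may surface as the generic `LABEL_0`/`LABEL_1`.

```python
from transformers import pipeline

# Assumes tokenizer and weights live in the same repo; labels may be generic.
classifier = pipeline(
    "text-classification",
    model="muhtasham/tiny-mlm-glue-mrpc-target-glue-cola",
)
print(classifier("The book was written by the author."))
```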