Dataset columns (with observed value ranges):

| Column | Type | Observed range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-14 18:27:59 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 520 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-14 18:27:48 |
| card | string | length 11 to 1.01M |
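The rows that follow are individual records with these ten fields. As a minimal sketch of how such a dump could be loaded and filtered with the `datasets` library — the dataset id `your-namespace/model-cards-dump` is a placeholder, not the real name of this dataset:

```python
# Minimal sketch: load a Hub dataset with this schema and filter it.
# "your-namespace/model-cards-dump" is a placeholder id, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("your-namespace/model-cards-dump", split="train")

# Keep only transformers token-classification models.
subset = ds.filter(
    lambda row: row["library_name"] == "transformers"
    and row["pipeline_tag"] == "token-classification"
)

# Print a few records' id, download count and likes.
for row in subset.select(range(3)):
    print(row["modelId"], row["downloads"], row["likes"])
```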
brianrp2000/ppo-LunarLander-v2
brianrp2000
2022-10-24T20:38:53Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T20:38:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.46 +/- 14.17 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
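The card above leaves its usage section as a TODO. Below is a minimal sketch of the usual `huggingface_sb3` loading pattern for such a checkpoint; the archive name `ppo-LunarLander-v2.zip` and the classic (pre-0.26) `gym` reset/step API are assumptions, not details confirmed by the card:

```python
# Minimal sketch of the usual huggingface_sb3 loading pattern.
# The filename "ppo-LunarLander-v2.zip" is an assumption about this repo's contents.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="brianrp2000/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out the policy with the classic gym (<0.26) API used by SB3 releases of that era.
env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```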
stuartmesham/xlnet-large_spell_5k_2_p3
stuartmesham
2022-10-24T18:44:01Z
10
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:43:08Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_spell_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_spell_5k_2_p3 This model is a fine-tuned version of [model_saves/xlnet-large_spell_5k_2_p2](https://huggingface.co/model_saves/xlnet-large_spell_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4726 - Accuracy: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4353 | 0.9405 | | No log | 2.0 | 536 | 0.4413 | 0.9400 | | No log | 3.0 | 804 | 0.4726 | 0.9405 | | 0.3275 | 4.0 | 1072 | 0.5153 | 0.9397 | | 0.3275 | 5.0 | 1340 | 0.5466 | 0.9391 | | 0.3275 | 6.0 | 1608 | 0.5922 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
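The card lists hyperparameters and metrics but no usage snippet. A minimal sketch of loading the checkpoint as a standard `transformers` token-classification model follows; the example sentence is illustrative only, and the meaning of the predicted labels is defined by the author's training setup, which the card does not document:

```python
# Minimal sketch: load the checkpoint as a standard token-classification model.
# The label meanings come from the author's (undocumented) training setup.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "stuartmesham/xlnet-large_spell_5k_2_p3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("She go to school yesterday .", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the per-token argmax back to the model's label names.
predicted_ids = logits.argmax(dim=-1)[0].tolist()
print([model.config.id2label[i] for i in predicted_ids])
```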
stuartmesham/xlnet-large_spell_5k_1_p3
stuartmesham
2022-10-24T18:43:06Z
8
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:42:11Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_spell_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_spell_5k_1_p3 This model is a fine-tuned version of [model_saves/xlnet-large_spell_5k_1_p2](https://huggingface.co/model_saves/xlnet-large_spell_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4678 - Accuracy: 0.9400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4383 | 0.9397 | | No log | 2.0 | 536 | 0.4678 | 0.9400 | | No log | 3.0 | 804 | 0.4920 | 0.9397 | | 0.2974 | 4.0 | 1072 | 0.5351 | 0.9390 | | 0.2974 | 5.0 | 1340 | 0.5907 | 0.9388 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/xlnet-large_lemon-spell_5k_2_p3
stuartmesham
2022-10-24T18:38:23Z
8
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:37:30Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_lemon-spell_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_lemon-spell_5k_2_p3 This model is a fine-tuned version of [model_saves/xlnet-large_lemon-spell_5k_2_p2](https://huggingface.co/model_saves/xlnet-large_lemon-spell_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4461 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4461 | 0.9394 | | No log | 2.0 | 536 | 0.4657 | 0.9393 | | No log | 3.0 | 804 | 0.4947 | 0.9390 | | 0.2992 | 4.0 | 1072 | 0.5469 | 0.9383 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
michellejieli/test_classifier
michellejieli
2022-10-24T18:36:02Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T18:33:07Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: test_classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_classifier This model is a fine-tuned version of [j-hartmann/emotion-english-distilroberta-base](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8668 - Accuracy: 0.7337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7506 | 1.0 | 587 | 0.8760 | 0.7145 | | 0.6506 | 2.0 | 1174 | 0.8192 | 0.7303 | | 0.5242 | 3.0 | 1761 | 0.8668 | 0.7337 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
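This card also omits a usage example. A minimal sketch using the `transformers` pipeline API, assuming the checkpoint behaves as an ordinary text-classification model (the label names come from the undocumented fine-tuning data):

```python
# Minimal sketch: run the classifier through the transformers pipeline API.
# Label names depend on the fine-tuning data, which the card does not describe.
from transformers import pipeline

classifier = pipeline("text-classification", model="michellejieli/test_classifier")
print(classifier("I can't believe how well this turned out!"))
```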
stuartmesham/xlnet-large_lemon_5k_2_p3
stuartmesham
2022-10-24T18:32:45Z
9
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:31:51Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_lemon_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_lemon_5k_2_p3 This model is a fine-tuned version of [model_saves/xlnet-large_lemon_5k_2_p2](https://huggingface.co/model_saves/xlnet-large_lemon_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4422 - Accuracy: 0.9397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4422 | 0.9397 | | No log | 2.0 | 536 | 0.4614 | 0.9394 | | No log | 3.0 | 804 | 0.4924 | 0.9390 | | 0.2986 | 4.0 | 1072 | 0.5440 | 0.9389 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/xlnet-large_lemon_5k_1_p3
stuartmesham
2022-10-24T18:31:48Z
8
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:30:55Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_lemon_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_lemon_5k_1_p3 This model is a fine-tuned version of [model_saves/xlnet-large_lemon_5k_1_p2](https://huggingface.co/model_saves/xlnet-large_lemon_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4483 - Accuracy: 0.9406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4327 | 0.9397 | | No log | 2.0 | 536 | 0.4483 | 0.9406 | | No log | 3.0 | 804 | 0.4814 | 0.9404 | | 0.3281 | 4.0 | 1072 | 0.5127 | 0.9394 | | 0.3281 | 5.0 | 1340 | 0.5563 | 0.9391 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/xlnet-large_lemon_10k_2_p3
stuartmesham
2022-10-24T18:29:56Z
8
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:28:10Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_lemon_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_lemon_10k_2_p3 This model is a fine-tuned version of [model_saves/xlnet-large_lemon_10k_2_p2](https://huggingface.co/model_saves/xlnet-large_lemon_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4726 - Accuracy: 0.9399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4533 | 0.9398 | | No log | 2.0 | 536 | 0.4726 | 0.9399 | | No log | 3.0 | 804 | 0.5045 | 0.9393 | | 0.2939 | 4.0 | 1072 | 0.5533 | 0.9390 | | 0.2939 | 5.0 | 1340 | 0.6086 | 0.9388 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/xlnet-large_basetags_5k_1_p3
stuartmesham
2022-10-24T18:25:18Z
9
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:24:26Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_basetags_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_basetags_5k_1_p3 This model is a fine-tuned version of [model_saves/xlnet-large_basetags_5k_1_p2](https://huggingface.co/model_saves/xlnet-large_basetags_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4744 - Accuracy: 0.9398 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4461 | 0.9394 | | No log | 2.0 | 536 | 0.4744 | 0.9398 | | No log | 3.0 | 804 | 0.5171 | 0.9392 | | 0.273 | 4.0 | 1072 | 0.5515 | 0.9384 | | 0.273 | 5.0 | 1340 | 0.6133 | 0.9383 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/xlnet-large_basetags_10k_2_p3
stuartmesham
2022-10-24T18:23:26Z
8
0
transformers
[ "transformers", "pytorch", "xlnet", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:22:31Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlnet-large_basetags_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-large_basetags_10k_2_p3 This model is a fine-tuned version of [model_saves/xlnet-large_basetags_10k_2_p2](https://huggingface.co/model_saves/xlnet-large_basetags_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4800 - Accuracy: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4432 | 0.9404 | | No log | 2.0 | 536 | 0.4482 | 0.9401 | | No log | 3.0 | 804 | 0.4800 | 0.9405 | | 0.3219 | 4.0 | 1072 | 0.5201 | 0.9400 | | 0.3219 | 5.0 | 1340 | 0.5552 | 0.9394 | | 0.3219 | 6.0 | 1608 | 0.6083 | 0.9387 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
nick-carroll1/hf_fine_tune_hello_world
nick-carroll1
2022-10-24T18:17:14Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:yelp_review_full", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T18:14:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - yelp_review_full metrics: - accuracy model-index: - name: hf_fine_tune_hello_world results: - task: name: Text Classification type: text-classification dataset: name: yelp_review_full type: yelp_review_full config: yelp_review_full split: train args: yelp_review_full metrics: - name: Accuracy type: accuracy value: 0.592 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hf_fine_tune_hello_world This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. It achieves the following results on the evaluation set: - Loss: 1.0142 - Accuracy: 0.592 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 1.0844 | 0.529 | | No log | 2.0 | 250 | 1.0022 | 0.58 | | No log | 3.0 | 375 | 1.0142 | 0.592 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
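For reference, here is a sketch of a `Trainer` setup matching the hyperparameters listed in this card. It is a reconstruction, not the author's script; in particular, the 1,000-example subsample is only inferred from the reported 125 steps per epoch at batch size 8:

```python
# Sketch of a Trainer setup matching the card's hyperparameters (a reconstruction).
# The 1,000-example subsample is an assumption inferred from 125 steps/epoch at batch size 8.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)
train_ds = tokenized["train"].shuffle(seed=42).select(range(1000))
eval_ds = tokenized["test"].shuffle(seed=42).select(range(1000))

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)

args = TrainingArguments(
    output_dir="hf_fine_tune_hello_world",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```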
stuartmesham/roberta-large_spell_5k_5_p3
stuartmesham
2022-10-24T18:09:43Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:08:48Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_spell_5k_5_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_spell_5k_5_p3 This model is a fine-tuned version of [model_saves/roberta-large_spell_5k_5_p2](https://huggingface.co/model_saves/roberta-large_spell_5k_5_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4416 - Accuracy: 0.9388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 82 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4416 | 0.9388 | | No log | 2.0 | 536 | 0.4567 | 0.9384 | | No log | 3.0 | 804 | 0.5054 | 0.9386 | | 0.2675 | 4.0 | 1072 | 0.5354 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_spell_5k_1_p3
stuartmesham
2022-10-24T18:04:38Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:03:45Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_spell_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_spell_5k_1_p3 This model is a fine-tuned version of [model_saves/roberta-large_spell_5k_1_p2](https://huggingface.co/model_saves/roberta-large_spell_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4189 - Accuracy: 0.9395 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4189 | 0.9395 | | No log | 2.0 | 536 | 0.4434 | 0.9393 | | No log | 3.0 | 804 | 0.4638 | 0.9381 | | 0.2911 | 4.0 | 1072 | 0.5136 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_spell_10k_3_p3
stuartmesham
2022-10-24T18:03:42Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:02:49Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_spell_10k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_spell_10k_3_p3 This model is a fine-tuned version of [model_saves/roberta-large_spell_10k_3_p2](https://huggingface.co/model_saves/roberta-large_spell_10k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4350 - Accuracy: 0.9404 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9404 | 0.4350 | | No log | 2.0 | 536 | 0.9394 | 0.4450 | | No log | 3.0 | 804 | 0.9388 | 0.4803 | | 0.2844 | 4.0 | 1072 | 0.9386 | 0.5240 | | 0.2844 | 5.0 | 1340 | 0.9384 | 0.5639 | | 0.2844 | 6.0 | 1608 | 0.9387 | 0.6261 | | 0.2844 | 7.0 | 1876 | 0.9388 | 0.6881 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_spell_10k_2_p3
stuartmesham
2022-10-24T18:02:46Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:01:52Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_spell_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_spell_10k_2_p3 This model is a fine-tuned version of [model_saves/roberta-large_spell_10k_2_p2](https://huggingface.co/model_saves/roberta-large_spell_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4256 - Accuracy: 0.9409 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9409 | 0.4256 | | No log | 2.0 | 536 | 0.9408 | 0.4378 | | No log | 3.0 | 804 | 0.9401 | 0.4636 | | 0.3125 | 4.0 | 1072 | 0.9389 | 0.4978 | | 0.3125 | 5.0 | 1340 | 0.9397 | 0.5485 | | 0.3125 | 6.0 | 1608 | 0.9387 | 0.5955 | | 0.3125 | 7.0 | 1876 | 0.9379 | 0.6463 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_spell_10k_1_p3
stuartmesham
2022-10-24T18:01:49Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T18:00:34Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_spell_10k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_spell_10k_1_p3 This model is a fine-tuned version of [model_saves/roberta-large_spell_10k_1_p2](https://huggingface.co/model_saves/roberta-large_spell_10k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4478 - Accuracy: 0.9400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9394 | 0.4278 | | No log | 2.0 | 536 | 0.9400 | 0.4478 | | No log | 3.0 | 804 | 0.9385 | 0.4739 | | 0.2854 | 4.0 | 1072 | 0.9386 | 0.5202 | | 0.2854 | 5.0 | 1340 | 0.9399 | 0.5863 | | 0.2854 | 6.0 | 1608 | 0.9392 | 0.6210 | | 0.2854 | 7.0 | 1876 | 0.9385 | 0.6682 | | 0.1207 | 8.0 | 2144 | 0.9382 | 0.7322 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon-spell_5k_5_p3
stuartmesham
2022-10-24T17:59:36Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:58:44Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon-spell_5k_5_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon-spell_5k_5_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon-spell_5k_5_p2](https://huggingface.co/model_saves/roberta-large_lemon-spell_5k_5_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4791 - Accuracy: 0.9391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 82 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4315 | 0.9391 | | No log | 2.0 | 536 | 0.4467 | 0.9387 | | No log | 3.0 | 804 | 0.4791 | 0.9391 | | 0.2901 | 4.0 | 1072 | 0.5057 | 0.9386 | | 0.2901 | 5.0 | 1340 | 0.5766 | 0.9374 | | 0.2901 | 6.0 | 1608 | 0.6426 | 0.9384 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon-spell_5k_4_p3
stuartmesham
2022-10-24T17:58:41Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:57:48Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon-spell_5k_4_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon-spell_5k_4_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon-spell_5k_4_p2](https://huggingface.co/model_saves/roberta-large_lemon-spell_5k_4_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4209 - Accuracy: 0.9401 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 72 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4209 | 0.9401 | | No log | 2.0 | 536 | 0.4434 | 0.9392 | | No log | 3.0 | 804 | 0.4690 | 0.9395 | | 0.2919 | 4.0 | 1072 | 0.5258 | 0.9378 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon-spell_5k_3_p3
stuartmesham
2022-10-24T17:57:46Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:56:51Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon-spell_5k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon-spell_5k_3_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon-spell_5k_3_p2](https://huggingface.co/model_saves/roberta-large_lemon-spell_5k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4501 - Accuracy: 0.9388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4378 | 0.9387 | | No log | 2.0 | 536 | 0.4501 | 0.9388 | | No log | 3.0 | 804 | 0.4976 | 0.9381 | | 0.272 | 4.0 | 1072 | 0.5395 | 0.9381 | | 0.272 | 5.0 | 1340 | 0.5934 | 0.9376 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
kroos/autotrain-book_recommender-1867863842
kroos
2022-10-24T17:57:11Z
4
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:kroos/autotrain-data-book_recommender", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T17:51:56Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - kroos/autotrain-data-book_recommender co2_eq_emissions: emissions: 10.620169750625415 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1867863842 - CO2 Emissions (in grams): 10.6202 ## Validation Metrics - Loss: 0.946 - Accuracy: 0.594 - Macro F1: 0.387 - Micro F1: 0.594 - Weighted F1: 0.574 - Macro Precision: 0.370 - Micro Precision: 0.594 - Weighted Precision: 0.567 - Macro Recall: 0.417 - Micro Recall: 0.594 - Weighted Recall: 0.594 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/kroos/autotrain-book_recommender-1867863842 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kroos/autotrain-book_recommender-1867863842", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kroos/autotrain-book_recommender-1867863842", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
stuartmesham/roberta-large_lemon-spell_5k_1_p3
stuartmesham
2022-10-24T17:55:53Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:54:59Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon-spell_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon-spell_5k_1_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon-spell_5k_1_p2](https://huggingface.co/model_saves/roberta-large_lemon-spell_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4276 - Accuracy: 0.9404 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4276 | 0.9404 | | No log | 2.0 | 536 | 0.4368 | 0.9401 | | No log | 3.0 | 804 | 0.4663 | 0.9396 | | 0.3203 | 4.0 | 1072 | 0.5026 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon-spell_10k_3_p3
stuartmesham
2022-10-24T17:54:56Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:53:37Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon-spell_10k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon-spell_10k_3_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon-spell_10k_3_p2](https://huggingface.co/model_saves/roberta-large_lemon-spell_10k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4579 - Accuracy: 0.9392 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9390 | 0.4454 | | No log | 2.0 | 536 | 0.9392 | 0.4579 | | No log | 3.0 | 804 | 0.9387 | 0.5055 | | 0.2672 | 4.0 | 1072 | 0.9386 | 0.5471 | | 0.2672 | 5.0 | 1340 | 0.9378 | 0.6000 | | 0.2672 | 6.0 | 1608 | 0.9375 | 0.6508 | | 0.2672 | 7.0 | 1876 | 0.9374 | 0.7333 | | 0.1123 | 8.0 | 2144 | 0.9375 | 0.7822 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon-spell_10k_2_p3
stuartmesham
2022-10-24T17:53:34Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:52:42Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon-spell_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon-spell_10k_2_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon-spell_10k_2_p2](https://huggingface.co/model_saves/roberta-large_lemon-spell_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4359 - Accuracy: 0.9406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9406 | 0.4359 | | No log | 2.0 | 536 | 0.9399 | 0.4492 | | No log | 3.0 | 804 | 0.9399 | 0.4743 | | 0.2873 | 4.0 | 1072 | 0.9395 | 0.5155 | | 0.2873 | 5.0 | 1340 | 0.9389 | 0.5667 | | 0.2873 | 6.0 | 1608 | 0.9391 | 0.6481 | | 0.2873 | 7.0 | 1876 | 0.9381 | 0.6873 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon_5k_6_p3
stuartmesham
2022-10-24T17:51:06Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:50:13Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon_5k_6_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon_5k_6_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon_5k_6_p2](https://huggingface.co/model_saves/roberta-large_lemon_5k_6_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4225 - Accuracy: 0.9407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 92 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4225 | 0.9407 | | No log | 2.0 | 536 | 0.4325 | 0.9404 | | No log | 3.0 | 804 | 0.4516 | 0.9399 | | 0.3173 | 4.0 | 1072 | 0.4899 | 0.9388 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon_5k_5_p3
stuartmesham
2022-10-24T17:50:10Z
7
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:48:19Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon_5k_5_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon_5k_5_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon_5k_5_p2](https://huggingface.co/model_saves/roberta-large_lemon_5k_5_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4764 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 82 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4297 | 0.9391 | | No log | 2.0 | 536 | 0.4462 | 0.9390 | | No log | 3.0 | 804 | 0.4764 | 0.9394 | | 0.2902 | 4.0 | 1072 | 0.5053 | 0.9388 | | 0.2902 | 5.0 | 1340 | 0.5689 | 0.9378 | | 0.2902 | 6.0 | 1608 | 0.6370 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon_5k_4_p3
stuartmesham
2022-10-24T17:48:16Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:45:40Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon_5k_4_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon_5k_4_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon_5k_4_p2](https://huggingface.co/model_saves/roberta-large_lemon_5k_4_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4195 - Accuracy: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 72 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4195 | 0.9402 | | No log | 2.0 | 536 | 0.4397 | 0.9393 | | No log | 3.0 | 804 | 0.4683 | 0.9397 | | 0.29 | 4.0 | 1072 | 0.5288 | 0.9381 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon_5k_2_p3
stuartmesham
2022-10-24T17:44:43Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:43:46Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon_5k_2_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon_5k_2_p2](https://huggingface.co/model_saves/roberta-large_lemon_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4294 - Accuracy: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4294 | 0.9402 | | No log | 2.0 | 536 | 0.4405 | 0.9396 | | No log | 3.0 | 804 | 0.4707 | 0.9392 | | 0.29 | 4.0 | 1072 | 0.5095 | 0.9388 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon_10k_3_p3
stuartmesham
2022-10-24T17:42:46Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:41:52Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon_10k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon_10k_3_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon_10k_3_p2](https://huggingface.co/model_saves/roberta-large_lemon_10k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4616 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9393 | 0.4460 | | No log | 2.0 | 536 | 0.9394 | 0.4616 | | No log | 3.0 | 804 | 0.9382 | 0.5016 | | 0.2628 | 4.0 | 1072 | 0.9389 | 0.5514 | | 0.2628 | 5.0 | 1340 | 0.9377 | 0.6032 | | 0.2628 | 6.0 | 1608 | 0.9375 | 0.6419 | | 0.2628 | 7.0 | 1876 | 0.9377 | 0.7208 | | 0.1093 | 8.0 | 2144 | 0.9376 | 0.7791 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon_10k_2_p3
stuartmesham
2022-10-24T17:41:50Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:40:55Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon_10k_2_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon_10k_2_p2](https://huggingface.co/model_saves/roberta-large_lemon_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4381 - Accuracy: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9402 | 0.4381 | | No log | 2.0 | 536 | 0.9396 | 0.4498 | | No log | 3.0 | 804 | 0.9390 | 0.4764 | | 0.2859 | 4.0 | 1072 | 0.9391 | 0.5198 | | 0.2859 | 5.0 | 1340 | 0.9386 | 0.5669 | | 0.2859 | 6.0 | 1608 | 0.9382 | 0.6484 | | 0.2859 | 7.0 | 1876 | 0.9380 | 0.6938 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_lemon_10k_1_p3
stuartmesham
2022-10-24T17:40:53Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:39:42Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_lemon_10k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_lemon_10k_1_p3 This model is a fine-tuned version of [model_saves/roberta-large_lemon_10k_1_p2](https://huggingface.co/model_saves/roberta-large_lemon_10k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4327 - Accuracy: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | No log | 1.0 | 268 | 0.9402 | 0.4327 | | No log | 2.0 | 536 | 0.9401 | 0.4409 | | No log | 3.0 | 804 | 0.9397 | 0.4704 | | 0.317 | 4.0 | 1072 | 0.9389 | 0.5034 | | 0.317 | 5.0 | 1340 | 0.9389 | 0.5431 | | 0.317 | 6.0 | 1608 | 0.9384 | 0.5830 | | 0.317 | 7.0 | 1876 | 0.9387 | 0.6502 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_basetags_5k_4_p3
stuartmesham
2022-10-24T17:37:47Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:36:54Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_basetags_5k_4_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_basetags_5k_4_p3 This model is a fine-tuned version of [model_saves/roberta-large_basetags_5k_4_p2](https://huggingface.co/model_saves/roberta-large_basetags_5k_4_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4263 - Accuracy: 0.9403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 72 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4263 | 0.9403 | | No log | 2.0 | 536 | 0.4339 | 0.9400 | | No log | 3.0 | 804 | 0.4699 | 0.9398 | | 0.2897 | 4.0 | 1072 | 0.5028 | 0.9393 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_basetags_5k_2_p3
stuartmesham
2022-10-24T17:35:56Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:35:03Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_basetags_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_basetags_5k_2_p3 This model is a fine-tuned version of [model_saves/roberta-large_basetags_5k_2_p2](https://huggingface.co/model_saves/roberta-large_basetags_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4162 - Accuracy: 0.9409 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4162 | 0.9409 | | No log | 2.0 | 536 | 0.4259 | 0.9406 | | No log | 3.0 | 804 | 0.4544 | 0.9398 | | 0.3171 | 4.0 | 1072 | 0.4886 | 0.9387 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/roberta-large_basetags_10k_1_p3
stuartmesham
2022-10-24T17:32:05Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:31:14Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large_basetags_10k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_basetags_10k_1_p3 This model is a fine-tuned version of [model_saves/roberta-large_basetags_10k_1_p2](https://huggingface.co/model_saves/roberta-large_basetags_10k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4470 - Accuracy: 0.9398 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4255 | 0.9392 | | No log | 2.0 | 536 | 0.4470 | 0.9398 | | No log | 3.0 | 804 | 0.4726 | 0.9382 | | 0.2851 | 4.0 | 1072 | 0.5148 | 0.9381 | | 0.2851 | 5.0 | 1340 | 0.5858 | 0.9392 | | 0.2851 | 6.0 | 1608 | 0.6128 | 0.9386 | | 0.2851 | 7.0 | 1876 | 0.6744 | 0.9382 | | 0.1206 | 8.0 | 2144 | 0.7268 | 0.9378 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_spell_5k_6_p3
stuartmesham
2022-10-24T17:25:29Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:24:41Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_spell_5k_6_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_spell_5k_6_p3 This model is a fine-tuned version of [model_saves/electra-large_spell_5k_6_p2](https://huggingface.co/model_saves/electra-large_spell_5k_6_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4361 - Accuracy: 0.9395 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 92 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4361 | 0.9395 | | No log | 2.0 | 536 | 0.4487 | 0.9385 | | No log | 3.0 | 804 | 0.4750 | 0.9388 | | 0.3204 | 4.0 | 1072 | 0.4949 | 0.9371 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_spell_5k_5_p3
stuartmesham
2022-10-24T17:24:39Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:23:51Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_spell_5k_5_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_spell_5k_5_p3 This model is a fine-tuned version of [model_saves/electra-large_spell_5k_5_p2](https://huggingface.co/model_saves/electra-large_spell_5k_5_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4331 - Accuracy: 0.9400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 82 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4331 | 0.9400 | | No log | 2.0 | 536 | 0.4424 | 0.9393 | | No log | 3.0 | 804 | 0.4650 | 0.9392 | | 0.3503 | 4.0 | 1072 | 0.4915 | 0.9383 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_spell_5k_2_p3
stuartmesham
2022-10-24T17:22:06Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:21:16Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_spell_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_spell_5k_2_p3 This model is a fine-tuned version of [model_saves/electra-large_spell_5k_2_p2](https://huggingface.co/model_saves/electra-large_spell_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4383 - Accuracy: 0.9398 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4383 | 0.9398 | | No log | 2.0 | 536 | 0.4530 | 0.9390 | | No log | 3.0 | 804 | 0.4767 | 0.9389 | | 0.3217 | 4.0 | 1072 | 0.5029 | 0.9378 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_spell_10k_2_p3
stuartmesham
2022-10-24T17:19:32Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:18:42Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_spell_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_spell_10k_2_p3 This model is a fine-tuned version of [model_saves/electra-large_spell_10k_2_p2](https://huggingface.co/model_saves/electra-large_spell_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4425 - Accuracy: 0.9397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4425 | 0.9397 | | No log | 2.0 | 536 | 0.4513 | 0.9394 | | No log | 3.0 | 804 | 0.4718 | 0.9392 | | 0.3481 | 4.0 | 1072 | 0.4944 | 0.9377 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon-spell_5k_6_p3
stuartmesham
2022-10-24T17:17:49Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:16:58Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon-spell_5k_6_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon-spell_5k_6_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon-spell_5k_6_p2](https://huggingface.co/model_saves/electra-large_lemon-spell_5k_6_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4356 - Accuracy: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 92 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4356 | 0.9402 | | No log | 2.0 | 536 | 0.4461 | 0.9390 | | No log | 3.0 | 804 | 0.4616 | 0.9392 | | 0.3484 | 4.0 | 1072 | 0.4897 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon-spell_5k_4_p3
stuartmesham
2022-10-24T17:16:05Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:15:18Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon-spell_5k_4_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon-spell_5k_4_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon-spell_5k_4_p2](https://huggingface.co/model_saves/electra-large_lemon-spell_5k_4_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4388 - Accuracy: 0.9390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 72 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4388 | 0.9390 | | No log | 2.0 | 536 | 0.4498 | 0.9386 | | No log | 3.0 | 804 | 0.4768 | 0.9383 | | 0.3213 | 4.0 | 1072 | 0.5084 | 0.9378 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon-spell_5k_1_p3
stuartmesham
2022-10-24T17:13:24Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:12:37Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon-spell_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon-spell_5k_1_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon-spell_5k_1_p2](https://huggingface.co/model_saves/electra-large_lemon-spell_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4331 - Accuracy: 0.9401 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4331 | 0.9401 | | No log | 2.0 | 536 | 0.4433 | 0.9400 | | No log | 3.0 | 804 | 0.4620 | 0.9399 | | 0.3485 | 4.0 | 1072 | 0.4910 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon-spell_10k_3_p3
stuartmesham
2022-10-24T17:12:34Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:11:46Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon-spell_10k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon-spell_10k_3_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon-spell_10k_3_p2](https://huggingface.co/model_saves/electra-large_lemon-spell_10k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4488 - Accuracy: 0.9399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4436 | 0.9394 | | No log | 2.0 | 536 | 0.4488 | 0.9399 | | No log | 3.0 | 804 | 0.4711 | 0.9395 | | 0.349 | 4.0 | 1072 | 0.4948 | 0.9394 | | 0.349 | 5.0 | 1340 | 0.5264 | 0.9372 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon_5k_6_p3
stuartmesham
2022-10-24T17:09:59Z
7
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:09:10Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon_5k_6_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon_5k_6_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon_5k_6_p2](https://huggingface.co/model_saves/electra-large_lemon_5k_6_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4351 - Accuracy: 0.9400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 92 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4351 | 0.9400 | | No log | 2.0 | 536 | 0.4443 | 0.9391 | | No log | 3.0 | 804 | 0.4579 | 0.9395 | | 0.3493 | 4.0 | 1072 | 0.4852 | 0.9385 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon_5k_4_p3
stuartmesham
2022-10-24T17:08:17Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:07:29Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon_5k_4_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon_5k_4_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon_5k_4_p2](https://huggingface.co/model_saves/electra-large_lemon_5k_4_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4381 - Accuracy: 0.9388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 72 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4381 | 0.9388 | | No log | 2.0 | 536 | 0.4510 | 0.9383 | | No log | 3.0 | 804 | 0.4731 | 0.9383 | | 0.3237 | 4.0 | 1072 | 0.5063 | 0.9372 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon_5k_2_p3
stuartmesham
2022-10-24T17:06:37Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:05:47Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon_5k_2_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon_5k_2_p2](https://huggingface.co/model_saves/electra-large_lemon_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4364 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4364 | 0.9394 | | No log | 2.0 | 536 | 0.4515 | 0.9386 | | No log | 3.0 | 804 | 0.4689 | 0.9385 | | 0.3218 | 4.0 | 1072 | 0.5000 | 0.9380 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon_5k_1_p3
stuartmesham
2022-10-24T17:05:44Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:04:58Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon_5k_1_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon_5k_1_p2](https://huggingface.co/model_saves/electra-large_lemon_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4420 - Accuracy: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4349 | 0.9401 | | No log | 2.0 | 536 | 0.4420 | 0.9402 | | No log | 3.0 | 804 | 0.4655 | 0.9394 | | 0.3514 | 4.0 | 1072 | 0.4920 | 0.9382 | | 0.3514 | 5.0 | 1340 | 0.5162 | 0.9383 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_lemon_10k_3_p3
stuartmesham
2022-10-24T17:04:55Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:04:07Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon_10k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon_10k_3_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon_10k_3_p2](https://huggingface.co/model_saves/electra-large_lemon_10k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4506 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4451 | 0.9390 | | No log | 2.0 | 536 | 0.4506 | 0.9394 | | No log | 3.0 | 804 | 0.4746 | 0.9391 | | 0.3499 | 4.0 | 1072 | 0.4970 | 0.9390 | | 0.3499 | 5.0 | 1340 | 0.5279 | 0.9370 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
cyburn/bigeyes
cyburn
2022-10-24T17:04:10Z
0
0
null
[ "region:us" ]
null
2022-10-24T15:42:15Z
Dreambooth model trained on Big Eyes style paintings. Sample images from the model: https://huggingface.co/cyburn/bigeyes/blob/main/grid-0011.png https://huggingface.co/cyburn/bigeyes/blob/main/grid-0012.png https://huggingface.co/cyburn/bigeyes/blob/main/grid-0013.png Prompt: bigeyes artstyle
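Editor's note: this record gives only sample-image links and a trigger prompt, and its library_name field is null, so the storage format of the weights is not stated. Purely as an illustrative sketch under that caveat, a diffusers-style load might look as follows, assuming the repository holds Stable Diffusion weights in diffusers format.

```python
# Hedged sketch only: assumes cyburn/bigeyes contains diffusers-format Stable Diffusion
# weights, which the record itself does not confirm (library_name is null).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cyburn/bigeyes", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "bigeyes artstyle" is the trigger phrase given in the card.
image = pipe("portrait of a girl, bigeyes artstyle").images[0]
image.save("bigeyes_sample.png")
```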
stuartmesham/electra-large_lemon_10k_2_p3
stuartmesham
2022-10-24T17:04:05Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T17:03:15Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_lemon_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_lemon_10k_2_p3 This model is a fine-tuned version of [model_saves/electra-large_lemon_10k_2_p2](https://huggingface.co/model_saves/electra-large_lemon_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4466 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4466 | 0.9394 | | No log | 2.0 | 536 | 0.4601 | 0.9385 | | No log | 3.0 | 804 | 0.4774 | 0.9384 | | 0.3208 | 4.0 | 1072 | 0.5144 | 0.9384 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_basetags_5k_4_p3
stuartmesham
2022-10-24T17:00:39Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:59:49Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_basetags_5k_4_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_basetags_5k_4_p3 This model is a fine-tuned version of [model_saves/electra-large_basetags_5k_4_p2](https://huggingface.co/model_saves/electra-large_basetags_5k_4_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4405 - Accuracy: 0.9391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 72 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4405 | 0.9391 | | No log | 2.0 | 536 | 0.4543 | 0.9383 | | No log | 3.0 | 804 | 0.4727 | 0.9381 | | 0.3209 | 4.0 | 1072 | 0.5058 | 0.9376 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_basetags_5k_3_p3
stuartmesham
2022-10-24T16:59:47Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:58:57Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_basetags_5k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_basetags_5k_3_p3 This model is a fine-tuned version of [model_saves/electra-large_basetags_5k_3_p2](https://huggingface.co/model_saves/electra-large_basetags_5k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4375 - Accuracy: 0.9392 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4375 | 0.9392 | | No log | 2.0 | 536 | 0.4485 | 0.9386 | | No log | 3.0 | 804 | 0.4752 | 0.9372 | | 0.3204 | 4.0 | 1072 | 0.4980 | 0.9373 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/electra-large_basetags_5k_1_p3
stuartmesham
2022-10-24T16:58:04Z
6
0
transformers
[ "transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:57:17Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: electra-large_basetags_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-large_basetags_5k_1_p3 This model is a fine-tuned version of [model_saves/electra-large_basetags_5k_1_p2](https://huggingface.co/model_saves/electra-large_basetags_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4574 - Accuracy: 0.9389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4389 | 0.9384 | | No log | 2.0 | 536 | 0.4574 | 0.9389 | | No log | 3.0 | 804 | 0.4744 | 0.9379 | | 0.3215 | 4.0 | 1072 | 0.5003 | 0.9375 | | 0.3215 | 5.0 | 1340 | 0.5413 | 0.9378 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_spell_10k_1_p3
stuartmesham
2022-10-24T16:39:58Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:38:38Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_spell_10k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_spell_10k_1_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_spell_10k_1_p2](https://huggingface.co/model_saves/deberta-v3-large_spell_10k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4189 - Accuracy: 0.9424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4189 | 0.9424 | | No log | 2.0 | 536 | 0.4353 | 0.9423 | | No log | 3.0 | 804 | 0.4562 | 0.9416 | | 0.2882 | 4.0 | 1072 | 0.4863 | 0.9408 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_lemon-spell_5k_3_p3
stuartmesham
2022-10-24T16:38:35Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:37:31Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_lemon-spell_5k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_lemon-spell_5k_3_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_lemon-spell_5k_3_p2](https://huggingface.co/model_saves/deberta-v3-large_lemon-spell_5k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4124 - Accuracy: 0.9416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4124 | 0.9416 | | No log | 2.0 | 536 | 0.4219 | 0.9413 | | No log | 3.0 | 804 | 0.4521 | 0.9406 | | 0.2931 | 4.0 | 1072 | 0.4867 | 0.9403 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_lemon-spell_5k_2_p3
stuartmesham
2022-10-24T16:37:28Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:36:27Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_lemon-spell_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_lemon-spell_5k_2_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_lemon-spell_5k_2_p2](https://huggingface.co/model_saves/deberta-v3-large_lemon-spell_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4167 - Accuracy: 0.9418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4167 | 0.9418 | | No log | 2.0 | 536 | 0.4368 | 0.9408 | | No log | 3.0 | 804 | 0.4634 | 0.9407 | | 0.2655 | 4.0 | 1072 | 0.5009 | 0.9401 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_lemon-spell_10k_2_p3
stuartmesham
2022-10-24T16:33:14Z
5
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:32:12Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_lemon-spell_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_lemon-spell_10k_2_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_lemon-spell_10k_2_p2](https://huggingface.co/model_saves/deberta-v3-large_lemon-spell_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4464 - Accuracy: 0.9413 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4290 | 0.9413 | | No log | 2.0 | 536 | 0.4464 | 0.9413 | | No log | 3.0 | 804 | 0.4729 | 0.9404 | | 0.2621 | 4.0 | 1072 | 0.5098 | 0.9397 | | 0.2621 | 5.0 | 1340 | 0.5510 | 0.9394 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_lemon_5k_3_p3
stuartmesham
2022-10-24T16:30:57Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:29:47Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_lemon_5k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_lemon_5k_3_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_lemon_5k_3_p2](https://huggingface.co/model_saves/deberta-v3-large_lemon_5k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4203 - Accuracy: 0.9416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4128 | 0.9414 | | No log | 2.0 | 536 | 0.4203 | 0.9416 | | No log | 3.0 | 804 | 0.4517 | 0.9403 | | 0.2959 | 4.0 | 1072 | 0.4774 | 0.9404 | | 0.2959 | 5.0 | 1340 | 0.5193 | 0.9390 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_lemon_5k_1_p3
stuartmesham
2022-10-24T16:28:38Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:27:29Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_lemon_5k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_lemon_5k_1_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_lemon_5k_1_p2](https://huggingface.co/model_saves/deberta-v3-large_lemon_5k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4091 - Accuracy: 0.9417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4091 | 0.9417 | | No log | 2.0 | 536 | 0.4229 | 0.9416 | | No log | 3.0 | 804 | 0.4553 | 0.9412 | | 0.2934 | 4.0 | 1072 | 0.4879 | 0.9405 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_lemon_10k_3_p3
stuartmesham
2022-10-24T16:27:26Z
7
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:26:24Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_lemon_10k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_lemon_10k_3_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_lemon_10k_3_p2](https://huggingface.co/model_saves/deberta-v3-large_lemon_10k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4239 - Accuracy: 0.9416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4239 | 0.9416 | | No log | 2.0 | 536 | 0.4313 | 0.9416 | | No log | 3.0 | 804 | 0.4624 | 0.9406 | | 0.2907 | 4.0 | 1072 | 0.4935 | 0.9406 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_lemon_10k_1_p3
stuartmesham
2022-10-24T16:25:16Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:24:13Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_lemon_10k_1_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_lemon_10k_1_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_lemon_10k_1_p2](https://huggingface.co/model_saves/deberta-v3-large_lemon_10k_1_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4348 - Accuracy: 0.9421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4216 | 0.9420 | | No log | 2.0 | 536 | 0.4348 | 0.9421 | | No log | 3.0 | 804 | 0.4651 | 0.9412 | | 0.2904 | 4.0 | 1072 | 0.4938 | 0.9402 | | 0.2904 | 5.0 | 1340 | 0.5352 | 0.9401 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_basetags_5k_3_p3
stuartmesham
2022-10-24T16:24:10Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:23:08Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_basetags_5k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_basetags_5k_3_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_basetags_5k_3_p2](https://huggingface.co/model_saves/deberta-v3-large_basetags_5k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4184 - Accuracy: 0.9421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4074 | 0.9420 | | No log | 2.0 | 536 | 0.4184 | 0.9421 | | No log | 3.0 | 804 | 0.4449 | 0.9406 | | 0.2925 | 4.0 | 1072 | 0.4782 | 0.9405 | | 0.2925 | 5.0 | 1340 | 0.5182 | 0.9399 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_basetags_10k_3_p3
stuartmesham
2022-10-24T16:20:57Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:19:55Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_basetags_10k_3_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_basetags_10k_3_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_basetags_10k_3_p2](https://huggingface.co/model_saves/deberta-v3-large_basetags_10k_3_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4189 - Accuracy: 0.9419 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 62 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4189 | 0.9419 | | No log | 2.0 | 536 | 0.4315 | 0.9419 | | No log | 3.0 | 804 | 0.4568 | 0.9405 | | 0.2882 | 4.0 | 1072 | 0.4921 | 0.9403 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-v3-large_basetags_10k_2_p3
stuartmesham
2022-10-24T16:19:52Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:18:51Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large_basetags_10k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_basetags_10k_2_p3 This model is a fine-tuned version of [model_saves/deberta-v3-large_basetags_10k_2_p2](https://huggingface.co/model_saves/deberta-v3-large_basetags_10k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4198 - Accuracy: 0.9430 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4198 | 0.9430 | | No log | 2.0 | 536 | 0.4301 | 0.9418 | | No log | 3.0 | 804 | 0.4566 | 0.9411 | | 0.2874 | 4.0 | 1072 | 0.4852 | 0.9404 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-large_spell_5k_5_p3
stuartmesham
2022-10-24T16:16:07Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:15:09Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-large_spell_5k_5_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-large_spell_5k_5_p3 This model is a fine-tuned version of [model_saves/deberta-large_spell_5k_5_p2](https://huggingface.co/model_saves/deberta-large_spell_5k_5_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4162 - Accuracy: 0.9411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 82 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4162 | 0.9411 | | No log | 2.0 | 536 | 0.4404 | 0.9404 | | No log | 3.0 | 804 | 0.4810 | 0.9403 | | 0.2516 | 4.0 | 1072 | 0.5352 | 0.9393 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-large_spell_5k_2_p3
stuartmesham
2022-10-24T16:13:06Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:12:10Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-large_spell_5k_2_p3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-large_spell_5k_2_p3 This model is a fine-tuned version of [model_saves/deberta-large_spell_5k_2_p2](https://huggingface.co/model_saves/deberta-large_spell_5k_2_p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4141 - Accuracy: 0.9416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 52 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 268 | 0.4141 | 0.9416 | | No log | 2.0 | 536 | 0.4367 | 0.9412 | | No log | 3.0 | 804 | 0.4807 | 0.9400 | | 0.255 | 4.0 | 1072 | 0.5355 | 0.9398 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
stuartmesham/deberta-large_spell_5k_1_p3
stuartmesham
2022-10-24T16:12:07Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:10:01Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_spell_5k_1_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_spell_5k_1_p3

This model is a fine-tuned version of [model_saves/deberta-large_spell_5k_1_p2](https://huggingface.co/model_saves/deberta-large_spell_5k_1_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4427
- Accuracy: 0.9413

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4156 | 0.9408 |
| No log | 2.0 | 536 | 0.4427 | 0.9413 |
| No log | 3.0 | 804 | 0.4710 | 0.9407 |
| 0.2543 | 4.0 | 1072 | 0.5293 | 0.9397 |
| 0.2543 | 5.0 | 1340 | 0.5923 | 0.9391 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_spell_10k_1_p3
stuartmesham
2022-10-24T16:07:56Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:06:57Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_spell_10k_1_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_spell_10k_1_p3

This model is a fine-tuned version of [model_saves/deberta-large_spell_10k_1_p2](https://huggingface.co/model_saves/deberta-large_spell_10k_1_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4534
- Accuracy: 0.9416

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4240 | 0.9411 |
| No log | 2.0 | 536 | 0.4534 | 0.9416 |
| No log | 3.0 | 804 | 0.4793 | 0.9409 |
| 0.2492 | 4.0 | 1072 | 0.5380 | 0.9403 |
| 0.2492 | 5.0 | 1340 | 0.5923 | 0.9399 |
| 0.2492 | 6.0 | 1608 | 0.6552 | 0.9398 |
| 0.2492 | 7.0 | 1876 | 0.7205 | 0.9386 |
| 0.0701 | 8.0 | 2144 | 0.7646 | 0.9395 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon-spell_5k_4_p3
stuartmesham
2022-10-24T16:04:50Z
7
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:03:53Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon-spell_5k_4_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon-spell_5k_4_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon-spell_5k_4_p2](https://huggingface.co/model_saves/deberta-large_lemon-spell_5k_4_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4154
- Accuracy: 0.9420

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 72
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4154 | 0.9420 |
| No log | 2.0 | 536 | 0.4406 | 0.9410 |
| No log | 3.0 | 804 | 0.4833 | 0.9407 |
| 0.2535 | 4.0 | 1072 | 0.5352 | 0.9396 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon-spell_5k_3_p3
stuartmesham
2022-10-24T16:03:50Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:02:54Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon-spell_5k_3_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon-spell_5k_3_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon-spell_5k_3_p2](https://huggingface.co/model_saves/deberta-large_lemon-spell_5k_3_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4165
- Accuracy: 0.9416

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 62
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4165 | 0.9416 |
| No log | 2.0 | 536 | 0.4361 | 0.9410 |
| No log | 3.0 | 804 | 0.4829 | 0.9402 |
| 0.256 | 4.0 | 1072 | 0.5374 | 0.9400 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon-spell_5k_2_p3
stuartmesham
2022-10-24T16:02:51Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:01:53Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon-spell_5k_2_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon-spell_5k_2_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon-spell_5k_2_p2](https://huggingface.co/model_saves/deberta-large_lemon-spell_5k_2_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4300
- Accuracy: 0.9408

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 52
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4300 | 0.9408 |
| No log | 2.0 | 536 | 0.4692 | 0.9397 |
| No log | 3.0 | 804 | 0.5036 | 0.9393 |
| 0.2201 | 4.0 | 1072 | 0.5705 | 0.9400 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon-spell_5k_1_p3
stuartmesham
2022-10-24T16:01:51Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T16:00:51Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon-spell_5k_1_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon-spell_5k_1_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon-spell_5k_1_p2](https://huggingface.co/model_saves/deberta-large_lemon-spell_5k_1_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4165
- Accuracy: 0.9412

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4165 | 0.9412 |
| No log | 2.0 | 536 | 0.4405 | 0.9411 |
| No log | 3.0 | 804 | 0.4909 | 0.9407 |
| 0.2552 | 4.0 | 1072 | 0.5289 | 0.9401 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon-spell_10k_3_p3
stuartmesham
2022-10-24T16:00:48Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:59:26Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon-spell_10k_3_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon-spell_10k_3_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon-spell_10k_3_p2](https://huggingface.co/model_saves/deberta-large_lemon-spell_10k_3_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4269
- Accuracy: 0.9419

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 62
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4269 | 0.9419 |
| No log | 2.0 | 536 | 0.4457 | 0.9414 |
| No log | 3.0 | 804 | 0.4897 | 0.9407 |
| 0.2514 | 4.0 | 1072 | 0.5445 | 0.9405 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon-spell_10k_2_p3
stuartmesham
2022-10-24T15:59:23Z
8
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:58:23Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon-spell_10k_2_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon-spell_10k_2_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon-spell_10k_2_p2](https://huggingface.co/model_saves/deberta-large_lemon-spell_10k_2_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4281
- Accuracy: 0.9414

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 52
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4281 | 0.9414 |
| No log | 2.0 | 536 | 0.4557 | 0.9402 |
| No log | 3.0 | 804 | 0.4907 | 0.9399 |
| 0.249 | 4.0 | 1072 | 0.5485 | 0.9403 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon-spell_10k_1_p3
stuartmesham
2022-10-24T15:58:21Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:57:22Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon-spell_10k_1_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon-spell_10k_1_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon-spell_10k_1_p2](https://huggingface.co/model_saves/deberta-large_lemon-spell_10k_1_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4247
- Accuracy: 0.9413

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4247 | 0.9413 |
| No log | 2.0 | 536 | 0.4512 | 0.9411 |
| No log | 3.0 | 804 | 0.4965 | 0.9405 |
| 0.2492 | 4.0 | 1072 | 0.5336 | 0.9404 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon_5k_5_p3
stuartmesham
2022-10-24T15:56:21Z
7
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:55:20Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon_5k_5_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon_5k_5_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon_5k_5_p2](https://huggingface.co/model_saves/deberta-large_lemon_5k_5_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4370
- Accuracy: 0.9413

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 82
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4148 | 0.9411 |
| No log | 2.0 | 536 | 0.4370 | 0.9413 |
| No log | 3.0 | 804 | 0.4777 | 0.9408 |
| 0.2552 | 4.0 | 1072 | 0.5178 | 0.9401 |
| 0.2552 | 5.0 | 1340 | 0.5832 | 0.9399 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon_5k_4_p3
stuartmesham
2022-10-24T15:55:17Z
7
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:54:19Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon_5k_4_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon_5k_4_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon_5k_4_p2](https://huggingface.co/model_saves/deberta-large_lemon_5k_4_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4267
- Accuracy: 0.9416

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 72
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4267 | 0.9416 |
| No log | 2.0 | 536 | 0.4596 | 0.9403 |
| No log | 3.0 | 804 | 0.5083 | 0.9401 |
| 0.2208 | 4.0 | 1072 | 0.5562 | 0.9394 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon_5k_3_p3
stuartmesham
2022-10-24T15:54:16Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:53:19Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon_5k_3_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon_5k_3_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon_5k_3_p2](https://huggingface.co/model_saves/deberta-large_lemon_5k_3_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4162
- Accuracy: 0.9414

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 62
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4162 | 0.9414 |
| No log | 2.0 | 536 | 0.4353 | 0.9412 |
| No log | 3.0 | 804 | 0.4798 | 0.9402 |
| 0.2573 | 4.0 | 1072 | 0.5360 | 0.9398 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon_5k_1_p3
stuartmesham
2022-10-24T15:52:14Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:51:17Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon_5k_1_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon_5k_1_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon_5k_1_p2](https://huggingface.co/model_saves/deberta-large_lemon_5k_1_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4146
- Accuracy: 0.9413

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4146 | 0.9413 |
| No log | 2.0 | 536 | 0.4394 | 0.9410 |
| No log | 3.0 | 804 | 0.4904 | 0.9403 |
| 0.2551 | 4.0 | 1072 | 0.5282 | 0.9403 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon_10k_3_p3
stuartmesham
2022-10-24T15:51:14Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:50:15Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon_10k_3_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon_10k_3_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon_10k_3_p2](https://huggingface.co/model_saves/deberta-large_lemon_10k_3_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4268
- Accuracy: 0.9413

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 62
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4268 | 0.9413 |
| No log | 2.0 | 536 | 0.4439 | 0.9411 |
| No log | 3.0 | 804 | 0.4914 | 0.9401 |
| 0.2514 | 4.0 | 1072 | 0.5406 | 0.9398 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon_10k_2_p3
stuartmesham
2022-10-24T15:50:12Z
14
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:49:12Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon_10k_2_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon_10k_2_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon_10k_2_p2](https://huggingface.co/model_saves/deberta-large_lemon_10k_2_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4400
- Accuracy: 0.9402

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 52
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4400 | 0.9402 |
| No log | 2.0 | 536 | 0.4763 | 0.9395 |
| No log | 3.0 | 804 | 0.5166 | 0.9386 |
| 0.2171 | 4.0 | 1072 | 0.5735 | 0.9395 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_lemon_10k_1_p3
stuartmesham
2022-10-24T15:49:10Z
5
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:48:12Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_lemon_10k_1_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_lemon_10k_1_p3

This model is a fine-tuned version of [model_saves/deberta-large_lemon_10k_1_p2](https://huggingface.co/model_saves/deberta-large_lemon_10k_1_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4244
- Accuracy: 0.9413

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4244 | 0.9413 |
| No log | 2.0 | 536 | 0.4490 | 0.9408 |
| No log | 3.0 | 804 | 0.5007 | 0.9409 |
| 0.249 | 4.0 | 1072 | 0.5361 | 0.9406 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_basetags_5k_4_p3
stuartmesham
2022-10-24T15:45:51Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:44:55Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_basetags_5k_4_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_basetags_5k_4_p3

This model is a fine-tuned version of [model_saves/deberta-large_basetags_5k_4_p2](https://huggingface.co/model_saves/deberta-large_basetags_5k_4_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4154
- Accuracy: 0.9414

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 72
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4154 | 0.9414 |
| No log | 2.0 | 536 | 0.4354 | 0.9410 |
| No log | 3.0 | 804 | 0.4763 | 0.9406 |
| 0.2537 | 4.0 | 1072 | 0.5329 | 0.9406 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_basetags_5k_3_p3
stuartmesham
2022-10-24T15:44:52Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:43:56Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_basetags_5k_3_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_basetags_5k_3_p3

This model is a fine-tuned version of [model_saves/deberta-large_basetags_5k_3_p2](https://huggingface.co/model_saves/deberta-large_basetags_5k_3_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4160
- Accuracy: 0.9414

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 62
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4160 | 0.9414 |
| No log | 2.0 | 536 | 0.4364 | 0.9403 |
| No log | 3.0 | 804 | 0.4786 | 0.9398 |
| 0.2537 | 4.0 | 1072 | 0.5255 | 0.9392 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
stuartmesham/deberta-large_basetags_5k_2_p3
stuartmesham
2022-10-24T15:43:53Z
6
0
transformers
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T15:42:40Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-large_basetags_5k_2_p3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large_basetags_5k_2_p3

This model is a fine-tuned version of [model_saves/deberta-large_basetags_5k_2_p2](https://huggingface.co/model_saves/deberta-large_basetags_5k_2_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4131
- Accuracy: 0.9416

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 52
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4131 | 0.9416 |
| No log | 2.0 | 536 | 0.4377 | 0.9414 |
| No log | 3.0 | 804 | 0.4755 | 0.9404 |
| 0.2528 | 4.0 | 1072 | 0.5314 | 0.9403 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
iwintory/ddpm-butterflies-128
iwintory
2022-10-24T15:33:36Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-24T14:45:50Z
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-butterflies-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/iwintory/ddpm-butterflies-128/tensorboard?#scalars)
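A minimal sketch of what the missing usage snippet might look like, assuming the standard `DDPMPipeline` API from 🤗 Diffusers and the repo id from this card:

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("iwintory/ddpm-butterflies-128")
image = pipeline(batch_size=1).images[0]  # samples one 128x128 butterfly image
image.save("butterfly.png")
```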
esb/conformer-rnnt-chime4
esb
2022-10-24T15:26:33Z
3
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:ldc/chime-4", "region:us" ]
null
2022-10-24T15:26:18Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/chime-4
---

To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
  --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
  --model_name_or_path="stt_en_conformer_transducer_xlarge" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="chime4" \
  --tokenizer_path="tokenizer" \
  --vocab_size="1024" \
  --max_steps="100000" \
  --output_dir="./" \
  --run_name="conformer-rnnt-chime4" \
  --wandb_project="rnnt" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="50" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --save_strategy="steps" \
  --save_steps="20000" \
  --evaluation_strategy="steps" \
  --eval_steps="20000" \
  --report_to="wandb" \
  --preprocessing_num_workers="4" \
  --fused_batch_size="4" \
  --length_column_name="input_lengths" \
  --fuse_loss_wer \
  --group_by_length \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --use_auth_token
```
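For inference, a hedged sketch (not part of the original card): it assumes the repository hosts a `.nemo` checkpoint (the filename below is a guess) and uses NeMo's RNN-T BPE model class, which matches the config named above:

```python
from huggingface_hub import hf_hub_download
import nemo.collections.asr as nemo_asr

# The .nemo filename is an assumption; check the repository's file listing for the actual name.
ckpt_path = hf_hub_download(repo_id="esb/conformer-rnnt-chime4", filename="conformer-rnnt-chime4.nemo")
asr_model = nemo_asr.models.EncDecRNNTBPEModel.restore_from(ckpt_path)
print(asr_model.transcribe(["/path/to/audio.wav"]))
```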
esb/conformer-rnnt-ami
esb
2022-10-24T15:22:05Z
2
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:edinburghcstr/ami", "region:us" ]
null
2022-10-24T15:21:51Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- edinburghcstr/ami
---

To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
  --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
  --model_name_or_path="stt_en_conformer_transducer_xlarge" \
  --dataset_name="esb/datasets" \
  --tokenizer_path="tokenizer" \
  --vocab_size="1024" \
  --max_steps="100000" \
  --dataset_config_name="ami" \
  --output_dir="./" \
  --run_name="conformer-rnnt-ami" \
  --wandb_project="rnnt" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="50" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --save_strategy="steps" \
  --save_steps="20000" \
  --evaluation_strategy="steps" \
  --eval_steps="20000" \
  --report_to="wandb" \
  --preprocessing_num_workers="4" \
  --fused_batch_size="4" \
  --length_column_name="input_lengths" \
  --fuse_loss_wer \
  --group_by_length \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --use_auth_token
```
esb/conformer-rnnt-earnings22
esb
2022-10-24T15:19:43Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "region:us" ]
null
2022-10-24T15:19:28Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- revdotcom/earnings22
---

To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
  --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
  --model_name_or_path="stt_en_conformer_transducer_xlarge" \
  --dataset_name="esb/datasets" \
  --tokenizer_path="tokenizer" \
  --vocab_size="1024" \
  --max_steps="100000" \
  --dataset_config_name="earnings22" \
  --output_dir="./" \
  --run_name="conformer-rnnt-earnings22" \
  --wandb_project="rnnt" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="50" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --save_strategy="steps" \
  --save_steps="20000" \
  --evaluation_strategy="steps" \
  --eval_steps="20000" \
  --report_to="wandb" \
  --preprocessing_num_workers="4" \
  --fused_batch_size="4" \
  --length_column_name="input_lengths" \
  --fuse_loss_wer \
  --group_by_length \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --use_auth_token
```
esb/conformer-rnnt-gigaspeech
esb
2022-10-24T15:15:20Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:speechcolab/gigaspeech", "region:us" ]
null
2022-10-24T15:15:05Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- speechcolab/gigaspeech
---

To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
  --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
  --model_name_or_path="stt_en_conformer_transducer_xlarge" \
  --dataset_name="esb/datasets" \
  --tokenizer_path="tokenizer" \
  --vocab_size="1024" \
  --num_train_epochs="0.88" \
  --dataset_config_name="gigaspeech" \
  --output_dir="./" \
  --run_name="conformer-rnnt-gigaspeech" \
  --wandb_project="rnnt" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="50" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --save_strategy="steps" \
  --save_steps="20000" \
  --evaluation_strategy="steps" \
  --eval_steps="20000" \
  --report_to="wandb" \
  --preprocessing_num_workers="4" \
  --fused_batch_size="4" \
  --length_column_name="input_lengths" \
  --fuse_loss_wer \
  --group_by_length \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --use_auth_token
```
esb/conformer-rnnt-voxpopuli
esb
2022-10-24T15:13:22Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:facebook/voxpopuli", "region:us" ]
null
2022-10-24T15:13:07Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- facebook/voxpopuli
---

To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
  --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
  --model_name_or_path="stt_en_conformer_transducer_xlarge" \
  --dataset_name="esb/datasets" \
  --tokenizer_path="tokenizer" \
  --vocab_size="1024" \
  --max_steps="100000" \
  --dataset_config_name="voxpopuli" \
  --output_dir="./" \
  --run_name="conformer-rnnt-voxpopuli" \
  --wandb_project="rnnt" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="50" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --save_strategy="steps" \
  --save_steps="20000" \
  --evaluation_strategy="steps" \
  --eval_steps="20000" \
  --report_to="wandb" \
  --preprocessing_num_workers="4" \
  --fused_batch_size="4" \
  --length_column_name="input_lengths" \
  --fuse_loss_wer \
  --group_by_length \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --use_auth_token
```
esb/conformer-rnnt-common_voice
esb
2022-10-24T15:08:53Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:mozilla-foundation/common_voice_9_0", "region:us" ]
null
2022-10-24T15:08:38Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- mozilla-foundation/common_voice_9_0
---

To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
  --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
  --model_name_or_path="stt_en_conformer_transducer_xlarge" \
  --dataset_name="esb/datasets" \
  --tokenizer_path="tokenizer" \
  --vocab_size="1024" \
  --max_steps="100000" \
  --dataset_config_name="common_voice" \
  --output_dir="./" \
  --run_name="conformer-rnnt-common-voice" \
  --wandb_project="rnnt" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="50" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --save_strategy="steps" \
  --save_steps="20000" \
  --evaluation_strategy="steps" \
  --eval_steps="20000" \
  --report_to="wandb" \
  --preprocessing_num_workers="4" \
  --fused_batch_size="4" \
  --length_column_name="input_lengths" \
  --max_eval_duration_in_seconds="20" \
  --fuse_loss_wer \
  --group_by_length \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --use_auth_token
```
esb/conformer-rnnt-librispeech
esb
2022-10-24T15:05:56Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:librispeech_asr", "region:us" ]
null
2022-10-24T15:05:41Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- librispeech_asr
---

To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
  --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
  --model_name_or_path="stt_en_conformer_transducer_xlarge" \
  --dataset_name="esb/datasets" \
  --tokenizer_path="tokenizer" \
  --vocab_size="1024" \
  --max_steps="100000" \
  --dataset_config_name="librispeech" \
  --output_dir="./" \
  --run_name="conformer-rnnt-librispeech" \
  --wandb_project="rnnt" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="50" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --save_strategy="steps" \
  --save_steps="20000" \
  --evaluation_strategy="steps" \
  --eval_steps="20000" \
  --report_to="wandb" \
  --preprocessing_num_workers="4" \
  --fused_batch_size="4" \
  --length_column_name="input_lengths" \
  --fuse_loss_wer \
  --group_by_length \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --use_auth_token
```
esb/whisper-aed-chime4
esb
2022-10-24T15:03:26Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:ldc/chime-4", "region:us" ]
null
2022-10-24T15:03:09Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/chime-4
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="chime4" \
  --max_steps="2500" \
  --output_dir="./" \
  --run_name="whisper-chime4" \
  --dropout_rate="0.1" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="500" \
  --save_strategy="steps" \
  --save_steps="500" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
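The card only covers fine-tuning. As a hedged reference (not part of the original card), this is how the underlying `openai-whisper` package transcribes audio with the base `medium.en` checkpoint; loading the fine-tuned weights depends on the checkpoint format written by `run_speech_recognition_whisper.py`, which is not documented here:

```python
import whisper

model = whisper.load_model("medium.en")          # base checkpoint this card fine-tunes from
result = model.transcribe("/path/to/audio.wav")  # path is a placeholder
print(result["text"])
```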
esb/whisper-aed-switchboard
esb
2022-10-24T15:01:09Z
0
1
null
[ "esb", "en", "dataset:esb/datasets", "dataset:ldc/switchboard", "region:us" ]
null
2022-10-24T15:00:52Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/switchboard
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="switchboard" \
  --max_steps="5000" \
  --output_dir="./" \
  --run_name="whisper-switchboard" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="1000" \
  --save_strategy="steps" \
  --save_steps="1000" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
esb/whisper-aed-ami
esb
2022-10-24T14:58:41Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:edinburghcstr/ami", "region:us" ]
null
2022-10-24T14:58:24Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- edinburghcstr/ami
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="ami" \
  --max_steps="2500" \
  --output_dir="./" \
  --run_name="whisper-ami" \
  --dropout_rate="0.1" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="500" \
  --save_strategy="steps" \
  --save_steps="500" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
esb/whisper-aed-earnings22
esb
2022-10-24T14:55:59Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "region:us" ]
null
2022-10-24T14:55:42Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- revdotcom/earnings22
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="earnings22" \
  --max_steps="2500" \
  --output_dir="./" \
  --run_name="whisper-earnings22" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="500" \
  --save_strategy="steps" \
  --save_steps="500" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
esb/whisper-aed-spgispeech
esb
2022-10-24T14:53:25Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:kensho/spgispeech", "region:us" ]
null
2022-10-24T14:53:08Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- kensho/spgispeech
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="spgispeech" \
  --max_steps="5000" \
  --output_dir="./" \
  --run_name="whisper-spgispeech" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="1000" \
  --save_strategy="steps" \
  --save_steps="1000" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
esb/whisper-aed-gigaspeech
esb
2022-10-24T14:50:45Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:speechcolab/gigaspeech", "region:us" ]
null
2022-10-24T14:50:28Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- speechcolab/gigaspeech
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="gigaspeech" \
  --max_steps="5000" \
  --output_dir="./" \
  --run_name="whisper-gigaspeech" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="1000" \
  --save_strategy="steps" \
  --save_steps="1000" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
esb/whisper-aed-voxpopuli
esb
2022-10-24T14:48:27Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:facebook/voxpopuli", "region:us" ]
null
2022-10-24T14:48:10Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- facebook/voxpopuli
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="voxpopuli" \
  --max_steps="5000" \
  --output_dir="./" \
  --run_name="whisper-voxpopuli" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="500" \
  --save_strategy="steps" \
  --save_steps="500" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
esb/whisper-aed-tedlium
esb
2022-10-24T14:45:31Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:LIUM/tedlium", "region:us" ]
null
2022-10-24T14:45:14Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- LIUM/tedlium
---

To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):

```
pip install git+https://github.com/openai/whisper.git
```

Then execute the command:

```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
  --model_name_or_path="medium.en" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="tedlium" \
  --max_steps="2500" \
  --output_dir="./" \
  --run_name="whisper-tedlium" \
  --wandb_project="whisper" \
  --per_device_train_batch_size="64" \
  --per_device_eval_batch_size="16" \
  --logging_steps="25" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --report_to="wandb" \
  --preprocessing_num_workers="16" \
  --evaluation_strategy="steps" \
  --eval_steps="500" \
  --save_strategy="steps" \
  --save_steps="500" \
  --generation_max_length="224" \
  --length_column_name="input_lengths" \
  --gradient_checkpointing \
  --group_by_length \
  --freeze_encoder \
  --fp16 \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --use_auth_token
```
esb/wav2vec2-aed-chime4
esb
2022-10-24T14:37:55Z
4
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:ldc/chime-4", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:37:41Z
---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/chime-4
---

To reproduce this run, execute:

```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
  --dataset_name="esb/datasets" \
  --model_name_or_path="esb/wav2vec2-aed-pretrained" \
  --dataset_config_name="chime4" \
  --output_dir="./" \
  --wandb_name="wav2vec2-aed-chime4" \
  --wandb_project="wav2vec2-aed" \
  --per_device_train_batch_size="8" \
  --per_device_eval_batch_size="4" \
  --logging_steps="25" \
  --max_steps="50001" \
  --eval_steps="10000" \
  --save_steps="10000" \
  --generation_max_length="40" \
  --generation_num_beams="1" \
  --final_generation_max_length="250" \
  --final_generation_num_beams="5" \
  --generation_length_penalty="0.6" \
  --learning_rate="1e-4" \
  --warmup_steps="500" \
  --hidden_dropout="0.2" \
  --activation_dropout="0.2" \
  --feat_proj_dropout="0.2" \
  --overwrite_output_dir \
  --gradient_checkpointing \
  --freeze_feature_encoder \
  --predict_with_generate \
  --do_eval \
  --do_train \
  --do_predict \
  --push_to_hub \
  --use_auth_token
```
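A hedged loading sketch (not part of the original card), assuming the repository stores a standard Flax `SpeechEncoderDecoderModel` checkpoint; the processor classes below are guesses, since the card does not name the feature extractor or tokenizer saved alongside the model:

```python
from transformers import FlaxSpeechEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer

repo_id = "esb/wav2vec2-aed-chime4"
model = FlaxSpeechEncoderDecoderModel.from_pretrained(repo_id)
# Assumed: the repo also contains preprocessing files loadable via the Auto classes.
feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```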