| Column | Type | Range / values |
|:---|:---|:---|
| modelId | stringlengths | 4–112 |
| sha | stringlengths | 40–40 |
| lastModified | stringlengths | 24–24 |
| tags | list | – |
| pipeline_tag | stringclasses | 29 values |
| private | bool | 1 class |
| author | stringlengths | 2–38 |
| config | null | – |
| id | stringlengths | 4–112 |
| downloads | float64 | 0–36.8M |
| likes | float64 | 0–712 |
| library_name | stringclasses | 17 values |
| `__index_level_0__` | int64 | 0–38.5k |
| readme | stringlengths | 0–186k |
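Each record below follows this schema in field order: modelId, sha, lastModified, tags, pipeline_tag, private, author, config, id, downloads, likes, library_name, `__index_level_0__`, readme. A minimal sketch of loading such a dump for inspection; the file name `models_metadata.parquet` is a hypothetical placeholder, since the source file behind this export is not named here:

```python
import pandas as pd

# Hypothetical file name; substitute the actual export behind this dump.
df = pd.read_parquet("models_metadata.parquet")

# Spot-check one record against the schema above.
row = df.iloc[0]
print(row["modelId"], row["pipeline_tag"], int(row["downloads"]), row["likes"])

# The readme field holds the raw model card text (0 to 186k characters).
print(row["readme"][:200])
```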
Sayan01/tiny-bert-sst2-distilled
0e5e7ce792c578721f7dfe3ed7f27f039f2b8466
2022-07-14T18:00:24.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Sayan01
null
Sayan01/tiny-bert-sst2-distilled
9
null
transformers
12,700
Entry not found
sijunhe/nezha-cn-large
3e9f8a4c096171b26d40feec300c6c4ae17307a5
2022-06-24T03:54:28.000Z
[ "pytorch", "nezha", "fill-mask", "arxiv:1909.00204", "transformers", "license:afl-3.0", "autotrain_compatible" ]
fill-mask
false
sijunhe
null
sijunhe/nezha-cn-large
9
null
transformers
12,701
---
license: afl-3.0
---

**Please use 'Bert' related tokenizer classes and 'Nezha' related model classes**

[NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.

The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch).

## Example Usage

```python
from transformers import BertTokenizer, NezhaModel

tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-cn-large")
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-large")

text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
davidcechak/DNADeberta_fine
19f38b2f454a66262b78d0ecb65768070589e7d3
2022-06-21T16:34:25.000Z
[ "pytorch", "deberta", "text-classification", "transformers" ]
text-classification
false
davidcechak
null
davidcechak/DNADeberta_fine
9
null
transformers
12,702
Entry not found
davidcechak/DNADeberta_fine_0.6394267984578837
b9eca035f53aa801ec16167e71c15f61e3633eda
2022-06-20T21:41:34.000Z
[ "pytorch", "deberta", "text-classification", "transformers" ]
text-classification
false
davidcechak
null
davidcechak/DNADeberta_fine_0.6394267984578837
9
null
transformers
12,703
Entry not found
fouad-shammary/distilbert-base-uncased-finetuned-emotion
68be303a57e3c4210b5bac8b4babd55f7bcdb457
2022-06-22T12:48:04.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
fouad-shammary
null
fouad-shammary/distilbert-base-uncased-finetuned-emotion
9
1
transformers
12,704
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9165
    - name: F1
      type: f1
      value: 0.9164107076814402
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2349
- Accuracy: 0.9165
- F1: 0.9164

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.837 | 1.0 | 250 | 0.3317 | 0.9015 | 0.8999 |
| 0.2563 | 2.0 | 500 | 0.2349 | 0.9165 | 0.9164 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
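A minimal inference sketch for this checkpoint, assuming the standard `transformers` pipeline API; the original card includes no usage code, and the example sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="fouad-shammary/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label with its score.
print(classifier("I am thrilled with how this turned out!"))
```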
mirikwa/gro-ner-60k
0a7099ab9fd223fc8609db841501380adc897bd8
2022-06-21T07:49:29.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
mirikwa
null
mirikwa/gro-ner-60k
9
null
transformers
12,705
Entry not found
furyhawk/distilbert-base-uncased-finetuned-clinc
829948c911f0e27a645b2a049b73c1f2ed175bea
2022-06-21T09:36:29.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:clinc_oos", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
furyhawk
null
furyhawk/distilbert-base-uncased-finetuned-clinc
9
null
transformers
12,706
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.915483870967742
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set:
- Loss: 0.7788
- Accuracy: 0.9155

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 4.2841 | 1.0 | 318 | 3.2794 | 0.7465 |
| 2.623 | 2.0 | 636 | 1.8719 | 0.8335 |
| 1.5474 | 3.0 | 954 | 1.1629 | 0.8929 |
| 1.014 | 4.0 | 1272 | 0.8621 | 0.9094 |
| 0.7987 | 5.0 | 1590 | 0.7788 | 0.9155 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
nvidia/groupvit-gcc-redcaps
9219591bc69463a890dd366c450ed32c6b0c2573
2022-07-08T15:24:15.000Z
[ "pytorch", "groupvit", "feature-extraction", "dataset:red_caps", "arxiv:2202.11094", "transformers", "vision" ]
feature-extraction
false
nvidia
null
nvidia/groupvit-gcc-redcaps
9
1
transformers
12,707
---
tags:
- vision
datasets:
- red_caps
---

# Model Card: GroupViT

This checkpoint is uploaded by Jiarui Xu.

## Model Details

The GroupViT model was proposed in [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. Inspired by [CLIP](clip), GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories.

### Model Date

June 2022

### Abstract

Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.

### Documents

- [GroupViT Paper](https://arxiv.org/abs/2202.11094)

### Use with Transformers

```python
from PIL import Image
import requests

from transformers import AutoProcessor, GroupViTModel

processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-redcaps")
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-redcaps")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```

## Data

The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/groupvit.html#).

### BibTeX entry and citation info

```bibtex
@article{xu2022groupvit,
  author  = {Xu, Jiarui and De Mello, Shalini and Liu, Sifei and Byeon, Wonmin and Breuel, Thomas and Kautz, Jan and Wang, Xiaolong},
  title   = {GroupViT: Semantic Segmentation Emerges from Text Supervision},
  journal = {arXiv preprint arXiv:2202.11094},
  year    = {2022},
}
```
Jeevesh8/std_0pnt2_bert_ft_cola-39
cce06f2f3f9fb04da7bab200d993c8e3764b14fe
2022-06-21T13:28:01.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-39
9
null
transformers
12,708
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-29
b3bb3417ff81a6bdfd583cd86ee2c32e642aa905
2022-06-21T13:28:09.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-29
9
null
transformers
12,709
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-42
2f8a806ca3bb11e9805d4823171078aa856786c0
2022-06-21T13:28:08.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-42
9
null
transformers
12,710
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-38
3027562176929c10dcd835b26382c285d51a1327
2022-06-21T13:27:43.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-38
9
null
transformers
12,711
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-32
1741bf78d18f960658088a55fa6ff97715a7c62e
2022-06-21T13:27:45.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-32
9
null
transformers
12,712
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-19
844beec17d08352fd410c7a9f46c663c009fd541
2022-06-21T13:30:22.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-19
9
null
transformers
12,713
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-31
56acbfa688f5921c9eda7eb048e93a58f1a23bac
2022-06-21T13:27:50.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-31
9
null
transformers
12,714
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-13
2588d3a9e92d0a3af33907660b414e46893db09d
2022-06-21T13:28:17.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-13
9
null
transformers
12,715
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-11
f76d2e94a3be71499fd78145fa168c913faecb1d
2022-06-21T13:27:46.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-11
9
null
transformers
12,716
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-61
41a451fe047da19440d25812667eb3bcca3c1ac0
2022-06-21T13:30:24.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-61
9
null
transformers
12,717
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-10
d2a63839ce5d35166ccd0bdc7f240fcf25a0b5ee
2022-06-21T13:27:56.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-10
9
null
transformers
12,718
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-47
29b7a39c730ec89274c986aacdd66048d4743f68
2022-06-21T13:33:50.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-47
9
null
transformers
12,719
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-63
cb11b6a5ec4d13ccb79a2c0f10775f6dbf893ccd
2022-06-21T13:28:43.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-63
9
null
transformers
12,720
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-64
06e44c7445c6231e745ce9fd0d21c3a2678fda4a
2022-06-21T13:28:37.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-64
9
null
transformers
12,721
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-57
c1a048d55a2435ee3dfc903f0f2d86e5e8648f1d
2022-06-21T13:28:02.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-57
9
null
transformers
12,722
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-23
304708b61df4bb0d34f4d1e3cd7d1ef41e1f929a
2022-06-21T13:30:01.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-23
9
null
transformers
12,723
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-59
4e0041724760f0fbb713b45f182e8e8c27f3a944
2022-06-21T13:28:07.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-59
9
null
transformers
12,724
Entry not found
Jeevesh8/std_0pnt2_bert_ft_cola-34
64c536c3b26ae463679ba48cdf67fe3097e1513a
2022-06-21T13:27:53.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_0pnt2_bert_ft_cola-34
9
null
transformers
12,725
Entry not found
kktoto/tiny_focal_v2_label
74525b170ad9daafa6713dcbaf8e14e9af3d4bfe
2022-06-22T05:55:32.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
kktoto
null
kktoto/tiny_focal_v2_label
9
null
transformers
12,726
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_focal_v2_label
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny_focal_v2_label

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0558
- Precision: 0.6979
- Recall: 0.6747
- F1: 0.6861
- Accuracy: 0.9513

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.0661 | 1.0 | 5561 | 0.0616 | 0.6850 | 0.6202 | 0.6510 | 0.9457 |
| 0.0613 | 2.0 | 11122 | 0.0587 | 0.6952 | 0.6351 | 0.6638 | 0.9480 |
| 0.0596 | 3.0 | 16683 | 0.0577 | 0.6814 | 0.6679 | 0.6746 | 0.9485 |
| 0.0555 | 4.0 | 22244 | 0.0567 | 0.6855 | 0.6693 | 0.6773 | 0.9492 |
| 0.0543 | 5.0 | 27805 | 0.0560 | 0.6966 | 0.6657 | 0.6808 | 0.9503 |
| 0.0529 | 6.0 | 33366 | 0.0558 | 0.7060 | 0.6587 | 0.6816 | 0.9510 |
| 0.052 | 7.0 | 38927 | 0.0552 | 0.7009 | 0.6662 | 0.6831 | 0.9510 |
| 0.0506 | 8.0 | 44488 | 0.0559 | 0.6921 | 0.6783 | 0.6852 | 0.9508 |
| 0.0501 | 9.0 | 50049 | 0.0556 | 0.6991 | 0.6716 | 0.6851 | 0.9512 |
| 0.0491 | 10.0 | 55610 | 0.0558 | 0.6979 | 0.6747 | 0.6861 | 0.9513 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
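A minimal call-pattern sketch, assuming the checkpoint loads through the standard token-classification pipeline; the card does not document the label set, so the returned tags are whatever the model ships with, and the input sentence is illustrative:

```python
from transformers import pipeline

# Illustrative input only; the card does not say what the labels mean.
tagger = pipeline("token-classification", model="kktoto/tiny_focal_v2_label")
print(tagger("hello how are you today"))
```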
Elron/deberta-v3-large-emotion
cd8f99963e4e4b1c23cdf07da9d61bcd7f7aacba
2022-06-22T09:48:01.000Z
[ "pytorch", "deberta-v2", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
Elron
null
Elron/deberta-v3-large-emotion
9
null
transformers
12,727
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
  results: []
---

# deberta-v3-large-sentiment

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.

## Model description

Test set results:

| Model | Emotion | Hate | Irony | Offensive | Sentiment |
|:---|:---:|:---:|:---:|:---:|:---:|
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |

[source: papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)

## Intended uses & limitations

Classifying attributes of interest on Twitter-like data.

## Training and evaluation data

The [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.

## Training procedure

Fine-tuned and evaluated with [run_glue.py]()

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
- label_smoothing_factor: 0.1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 1.2787 | 0.49 | 100 | 1.1127 | 0.4866 |
| 1.089 | 0.98 | 200 | 0.9668 | 0.7139 |
| 0.9134 | 1.47 | 300 | 0.8720 | 0.7834 |
| 0.8618 | 1.96 | 400 | 0.7726 | 0.7941 |
| 0.686 | 2.45 | 500 | 0.7337 | 0.8209 |
| 0.6333 | 2.94 | 600 | 0.7350 | 0.8235 |
| 0.5765 | 3.43 | 700 | 0.7561 | 0.8235 |
| 0.5502 | 3.92 | 800 | 0.7273 | 0.8476 |
| 0.5049 | 4.41 | 900 | 0.8137 | 0.8102 |
| 0.4695 | 4.9 | 1000 | 0.7581 | 0.8289 |
| 0.4657 | 5.39 | 1100 | 0.8404 | 0.8048 |
| 0.4549 | 5.88 | 1200 | 0.7800 | 0.8369 |
| 0.4305 | 6.37 | 1300 | 0.8575 | 0.8235 |
| 0.4209 | 6.86 | 1400 | 0.8572 | 0.8102 |
| 0.3983 | 7.35 | 1500 | 0.8392 | 0.8316 |
| 0.4139 | 7.84 | 1600 | 0.8152 | 0.8209 |
| 0.393 | 8.33 | 1700 | 0.8261 | 0.8289 |
| 0.3979 | 8.82 | 1800 | 0.8328 | 0.8235 |
| 0.3928 | 9.31 | 1900 | 0.8364 | 0.8209 |
| 0.3848 | 9.8 | 2000 | 0.8322 | 0.8235 |

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
robingeibel/reformer-finetuned-big_patent
c89fb7aad341ecb5d8e4ef3d377a6f9bcb32cf2d
2022-06-27T08:12:43.000Z
[ "pytorch", "tensorboard", "reformer", "fill-mask", "dataset:big_patent", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
fill-mask
false
robingeibel
null
robingeibel/reformer-finetuned-big_patent
9
null
transformers
12,728
---
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: reformer-finetuned-big_patent
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# reformer-finetuned-big_patent

This model is a fine-tuned version of [robingeibel/reformer-finetuned-big_patent](https://huggingface.co/robingeibel/reformer-finetuned-big_patent) on the big_patent dataset. It achieves the following results on the evaluation set:
- Loss: 0.0000

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 0.0 | 1.0 | 5973 | 0.0000 |
| 0.0 | 2.0 | 11946 | 0.0000 |
| 0.0 | 3.0 | 17919 | 0.0000 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
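A minimal fill-mask sketch, assuming the checkpoint's tokenizer defines a mask token (usage code is not part of the original card, and the input sentence is illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="robingeibel/reformer-finetuned-big_patent")

# Use the tokenizer's own mask token; bail out if this tokenizer defines none.
mask = fill.tokenizer.mask_token
assert mask is not None, "tokenizer has no mask token"
print(fill(f"The invention relates to a {mask} for data storage."))
```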
shaneweisz/DialoGPT-finetuned-multiCONAN
0dca5445a6c982d63e9ceeecbb0963709e358da8
2022-07-12T14:56:19.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
shaneweisz
null
shaneweisz/DialoGPT-finetuned-multiCONAN
9
null
transformers
12,729
Entry not found
kullackaan/sentiment-tweets
d84ee721fde8373fbf66e00e87b6e7ce44bce4c3
2022-06-22T22:07:54.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:afl-3.0" ]
text-classification
false
kullackaan
null
kullackaan/sentiment-tweets
9
null
transformers
12,730
---
license: afl-3.0
---
doraemon1998/distilroberta-base-finetuned-wikitext2
dab306515d08bd0310c5f6d773437a5419335023
2022-06-24T08:28:10.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
doraemon1998
null
doraemon1998/distilroberta-base-finetuned-wikitext2
9
null
transformers
12,731
Entry not found
kktoto/tiny_focal_v3
fa1d8738e47cb7886bcb7cbdc1b0930daea84625
2022-06-25T08:54:15.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
kktoto
null
kktoto/tiny_focal_v3
9
null
transformers
12,732
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_focal_v3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny_focal_v3

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0023
- Precision: 0.6975
- Recall: 0.6822
- F1: 0.6898
- Accuracy: 0.9515

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.004 | 1.0 | 5561 | 0.0032 | 0.6900 | 0.6102 | 0.6477 | 0.9454 |
| 0.0032 | 2.0 | 11122 | 0.0028 | 0.6901 | 0.6406 | 0.6644 | 0.9477 |
| 0.0029 | 3.0 | 16683 | 0.0026 | 0.6956 | 0.6509 | 0.6725 | 0.9490 |
| 0.0025 | 4.0 | 22244 | 0.0025 | 0.6838 | 0.6764 | 0.6801 | 0.9493 |
| 0.0024 | 5.0 | 27805 | 0.0024 | 0.6954 | 0.6715 | 0.6832 | 0.9504 |
| 0.0023 | 6.0 | 33366 | 0.0024 | 0.7125 | 0.6524 | 0.6811 | 0.9512 |
| 0.0021 | 7.0 | 38927 | 0.0023 | 0.6999 | 0.6748 | 0.6872 | 0.9514 |
| 0.0019 | 8.0 | 44488 | 0.0024 | 0.6962 | 0.6820 | 0.6890 | 0.9513 |
| 0.0019 | 9.0 | 50049 | 0.0023 | 0.7005 | 0.6775 | 0.6888 | 0.9516 |
| 0.0018 | 10.0 | 55610 | 0.0023 | 0.6975 | 0.6822 | 0.6898 | 0.9515 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
kktoto/tiny_focal_alpah
1cd569c0595f9a5eb5c593efb974ab3405262a83
2022-06-27T13:47:19.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
kktoto
null
kktoto/tiny_focal_alpah
9
null
transformers
12,733
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_focal_alpah
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny_focal_alpah

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0492
- Precision: 0.6951
- Recall: 0.6796
- F1: 0.6873
- Accuracy: 0.9512

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.0588 | 1.0 | 5561 | 0.0548 | 0.6801 | 0.6235 | 0.6506 | 0.9453 |
| 0.054 | 2.0 | 11122 | 0.0521 | 0.6850 | 0.6478 | 0.6659 | 0.9476 |
| 0.0525 | 3.0 | 16683 | 0.0509 | 0.6834 | 0.6676 | 0.6754 | 0.9486 |
| 0.0492 | 4.0 | 22244 | 0.0503 | 0.6829 | 0.6754 | 0.6791 | 0.9491 |
| 0.0482 | 5.0 | 27805 | 0.0500 | 0.6917 | 0.6727 | 0.6820 | 0.9501 |
| 0.0471 | 6.0 | 33366 | 0.0491 | 0.7085 | 0.6546 | 0.6805 | 0.9510 |
| 0.0459 | 7.0 | 38927 | 0.0486 | 0.6964 | 0.6746 | 0.6853 | 0.9510 |
| 0.0448 | 8.0 | 44488 | 0.0495 | 0.6922 | 0.6813 | 0.6867 | 0.9509 |
| 0.044 | 9.0 | 50049 | 0.0491 | 0.6961 | 0.6755 | 0.6857 | 0.9511 |
| 0.0433 | 10.0 | 55610 | 0.0492 | 0.6951 | 0.6796 | 0.6873 | 0.9512 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
ryo0634/bert-base-zip-dependency-encoder-en-0
3ae715646427cbcda54cb2f7e0c86022c61e6e57
2022-06-29T03:44:45.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ryo0634
null
ryo0634/bert-base-zip-dependency-encoder-en-0
9
null
transformers
12,734
Entry not found
Jour/tiny-m2m100-test
42e494f1316eb2cac8a51fb601ddec42d8e93a02
2022-06-29T10:22:01.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Jour
null
Jour/tiny-m2m100-test
9
null
transformers
12,735
Entry not found
Jeevesh8/goog_bert_ft_cola-1
c57c0cc6e7e8a623f0c0138573f4082a8ed75f95
2022-06-29T17:31:50.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-1
9
null
transformers
12,736
Entry not found
Jeevesh8/goog_bert_ft_cola-31
1aa30e0accad9595bf1688b33b77b6d6d456c5d3
2022-06-29T17:34:07.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-31
9
null
transformers
12,737
Entry not found
Jeevesh8/goog_bert_ft_cola-33
4f39ac4411a7c6ba1e4c24a989c05955e133ed4d
2022-06-29T17:34:18.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-33
9
null
transformers
12,738
Entry not found
Jeevesh8/goog_bert_ft_cola-35
56c899778e58c31df5b668152a16ee064fbb5af7
2022-06-29T17:33:52.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-35
9
null
transformers
12,739
Entry not found
Jeevesh8/goog_bert_ft_cola-46
773b768794dfc49f8dd4a49371ee7710ef53d071
2022-06-29T17:34:03.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-46
9
null
transformers
12,740
Entry not found
Jeevesh8/goog_bert_ft_cola-68
f1167bc00f2f6044372d6893bab132e66be69eff
2022-06-29T17:33:41.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-68
9
null
transformers
12,741
Entry not found
Jeevesh8/goog_bert_ft_cola-60
ed202e99b8ba0a1c59b224d61833a9f4aa626ce5
2022-06-29T17:34:08.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-60
9
null
transformers
12,742
Entry not found
Jeevesh8/goog_bert_ft_cola-74
b3aaf40793cc4c26b10cea1838957585a34e90a6
2022-06-29T17:35:23.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-74
9
null
transformers
12,743
Entry not found
Jeevesh8/goog_bert_ft_cola-52
60b14cbe3bc54cf9caf3d690a0ecd62d32361dbb
2022-06-29T17:34:23.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-52
9
null
transformers
12,744
Entry not found
Jeevesh8/goog_bert_ft_cola-48
52b50c825ceff43562ce1c5fcf3b71b3b2262ee6
2022-06-29T17:34:51.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-48
9
null
transformers
12,745
Entry not found
Jeevesh8/goog_bert_ft_cola-56
9908542549ee23041355df4c50003ce7669da093
2022-06-29T17:34:21.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-56
9
null
transformers
12,746
Entry not found
Jeevesh8/goog_bert_ft_cola-64
3088598358942a59903410a50d38d58606bb1e52
2022-06-29T17:35:33.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-64
9
null
transformers
12,747
Entry not found
Jeevesh8/goog_bert_ft_cola-58
093267db65e51d760f7ff3cbcd6b640b9e16312e
2022-06-29T17:35:25.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-58
9
null
transformers
12,748
Entry not found
Jeevesh8/goog_bert_ft_cola-77
2501352ae15a26ed3a071e47e7fe88fceda490fc
2022-06-29T17:34:04.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-77
9
null
transformers
12,749
Entry not found
Jeevesh8/goog_bert_ft_cola-82
e516e4dc81f5c5fee869cebdf18a69fd6da5a247
2022-06-29T17:33:50.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-82
9
null
transformers
12,750
Entry not found
Jeevesh8/goog_bert_ft_cola-81
4f5a1df14c6086cbb465c75a767b85f4e6815656
2022-06-29T17:33:48.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-81
9
null
transformers
12,751
Entry not found
Jeevesh8/goog_bert_ft_cola-83
e9d872809b8cb883adbdc72c2f90e3ac4f0b0181
2022-06-29T17:33:59.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-83
9
null
transformers
12,752
Entry not found
Jeevesh8/goog_bert_ft_cola-78
fc67d07c3d7bd0b2db85b0d15f6b46427da1da37
2022-06-29T17:34:04.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-78
9
null
transformers
12,753
Entry not found
Jeevesh8/goog_bert_ft_cola-29
f0a58f94a0c20853974e0ebfe52293a1d14e5501
2022-06-29T17:48:29.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/goog_bert_ft_cola-29
9
null
transformers
12,754
Entry not found
sexomq/TeoBot-Romanian-medium2
9e8a595c450796ae2bbf24c8446ed5f17272f663
2022-06-29T21:27:42.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
sexomq
null
sexomq/TeoBot-Romanian-medium2
9
null
transformers
12,755
---
tags:
- conversational
---
TheDiamondKing/Discord-Philosophy-Medium
98d419fdc66e7f706df36f4c5270b47b3d6f299c
2022-06-29T21:26:01.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
TheDiamondKing
null
TheDiamondKing/Discord-Philosophy-Medium
9
null
transformers
12,756
---
license: mit
---

Medium-sized model trained on philosophical questions (mainly from Discord), ~11000 messages.
tbasic5/distilbert-base-uncased-finetuned-emotion
0910f2b949726dce5ea0dbf69560fc325f050a8c
2022-06-29T22:21:00.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
tbasic5
null
tbasic5/distilbert-base-uncased-finetuned-emotion
9
null
transformers
12,757
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925
    - name: F1
      type: f1
      value: 0.925022224520608
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.925
- F1: 0.9250

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.8521 | 1.0 | 250 | 0.3164 | 0.907 | 0.9038 |
| 0.2549 | 2.0 | 500 | 0.2222 | 0.925 | 0.9250 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
mhaegeman/autotrain-country-recognition-1059336697
ccff26a442dc746dd0a7bab024d5fd16d0202a2b
2022-06-30T08:35:28.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:mhaegeman/autotrain-data-country-recognition", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
mhaegeman
null
mhaegeman/autotrain-country-recognition-1059336697
9
null
transformers
12,758
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- mhaegeman/autotrain-data-country-recognition
co2_eq_emissions: 0.02952188223491361
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 1059336697
- CO2 Emissions (in grams): 0.02952188223491361

## Validation Metrics

- Loss: 0.06108148396015167
- Accuracy: 0.9879569162920872
- Macro F1: 0.9765004449554612
- Micro F1: 0.9879569162920872
- Weighted F1: 0.9879450113590053
- Macro Precision: 0.9784321161207384
- Micro Precision: 0.9879569162920872
- Weighted Precision: 0.9880404765946114
- Macro Recall: 0.9748417542427885
- Micro Recall: 0.9879569162920872
- Weighted Recall: 0.9879569162920872

## Usage

You can use cURL to access this model:

```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mhaegeman/autotrain-country-recognition-1059336697
```

Or the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("mhaegeman/autotrain-country-recognition-1059336697", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mhaegeman/autotrain-country-recognition-1059336697", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
pfrdn/wav2vec2-large-xls-r-300m-turkish-colab
8fa1f7da0b36235833ab4a9bd25a69a1b46deb08
2022-07-04T16:19:39.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
pfrdn
null
pfrdn/wav2vec2-large-xls-r-300m-turkish-colab
9
null
transformers
12,759
Entry not found
gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53
77e29829170655fb4486f9233702675be214e4d7
2022-07-07T09:10:42.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "gary109/AI_Light_Dance", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
gary109
null
gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53
9
null
transformers
12,760
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53

This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset. It achieves the following results on the evaluation set:
- Loss: 0.8797
- Wer: 0.5513

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.9613 | 1.0 | 2309 | 1.0171 | 0.7271 |
| 0.8254 | 2.0 | 4618 | 0.9771 | 0.6650 |
| 0.7406 | 3.0 | 6927 | 0.9174 | 0.6420 |
| 0.74 | 4.0 | 9236 | 0.9551 | 0.6371 |
| 0.5855 | 5.0 | 11545 | 0.9262 | 0.6453 |
| 0.5536 | 6.0 | 13854 | 0.9056 | 0.5894 |
| 0.505 | 7.0 | 16163 | 0.9166 | 0.6029 |
| 0.449 | 8.0 | 18472 | 0.8816 | 0.5873 |
| 0.4219 | 9.0 | 20781 | 0.8970 | 0.5589 |
| 0.5764 | 10.0 | 23090 | 0.9189 | 0.5649 |
| 0.5075 | 11.0 | 25399 | 0.8797 | 0.5513 |
| 0.4366 | 12.0 | 27708 | 0.9011 | 0.5567 |
| 0.4915 | 13.0 | 30017 | 0.9248 | 0.5455 |
| 0.3554 | 14.0 | 32326 | 0.9309 | 0.5374 |
| 0.3975 | 15.0 | 34635 | 0.9103 | 0.5259 |
| 0.4119 | 16.0 | 36944 | 0.9402 | 0.5290 |
| 0.267 | 17.0 | 39253 | 0.9479 | 0.5115 |
| 0.3107 | 18.0 | 41562 | 0.9428 | 0.5099 |
| 0.2684 | 19.0 | 43871 | 0.9508 | 0.5133 |
| 0.2125 | 20.0 | 46180 | 0.9737 | 0.5097 |
| 0.3149 | 21.0 | 48489 | 0.9992 | 0.5095 |
| 0.2313 | 22.0 | 50798 | 1.0037 | 0.5059 |
| 0.2674 | 23.0 | 53107 | 1.0091 | 0.5040 |
| 0.2056 | 24.0 | 55416 | 1.0082 | 0.5076 |
| 0.2781 | 25.0 | 57725 | 1.0160 | 0.5015 |
| 0.2005 | 26.0 | 60034 | 1.0390 | 0.5131 |
| 0.2221 | 27.0 | 62343 | 1.0401 | 0.5074 |
| 0.1857 | 28.0 | 64652 | 1.0484 | 0.5096 |
| 0.1562 | 29.0 | 66961 | 1.0516 | 0.5064 |
| 0.3027 | 30.0 | 69270 | 1.0543 | 0.5049 |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
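A minimal transcription sketch, assuming the standard automatic-speech-recognition pipeline; `singing_sample.wav` is a placeholder path, not a file from the original card:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53",
)

# Placeholder path; any 16 kHz mono recording works here.
print(asr("singing_sample.wav")["text"])
```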
Sayan01/tiny-bert-stsb-distilled
4acbd7bab4a61c068f35a4479930aa4b17acbd76
2022-07-07T15:16:15.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Sayan01
null
Sayan01/tiny-bert-stsb-distilled
9
null
transformers
12,761
Entry not found
Pro0100Hy6/test_trainer
04a8bd9c9a6e3e26434d4a74e48e14fea3dcf58c
2022-07-03T17:45:59.000Z
[ "pytorch", "bert", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
Pro0100Hy6
null
Pro0100Hy6/test_trainer
9
null
transformers
12,762
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test_trainer

This model is a fine-tuned version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.6375

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.7753 | 1.0 | 400 | 0.7773 | 0.6375 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
Vinz9899/dumy-model
63ea229ecbce68cf6edb1543396471dc7912aa90
2022-07-03T18:03:43.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Vinz9899
null
Vinz9899/dumy-model
9
null
transformers
12,763
Entry not found
sijunhe/tiny_roformer_v2_test
df60da6caa4f5283bb9c349c384f338e47671f89
2022-07-09T15:48:11.000Z
[ "pytorch", "roformer", "fill-mask", "zh", "transformers", "roformer-v2", "autotrain_compatible" ]
fill-mask
false
sijunhe
null
sijunhe/tiny_roformer_v2_test
9
null
transformers
12,764
---
language: zh
tags:
- roformer-v2
- pytorch
inference: False
---

This is a test model of RoFormer V2.
LACAI/roberta-large-PFG-progression
04ce85bf2215067857c10a35b30c662251ecc07b
2022-07-04T19:16:52.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "license:mit" ]
text-classification
false
LACAI
null
LACAI/roberta-large-PFG-progression
9
null
transformers
12,765
---
license: mit
---

Base model: [roberta-large](https://huggingface.co/roberta-large)

Fine-tuned as a progression model (to predict the acceptability of a dialogue) on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019):

Given a complete dialogue from (or in the style of) Persuasion For Good, the task is to predict a numeric score, typically in the range (-3, 3), where a higher score means a more acceptable dialogue in the context of the donation solicitation task.

**Example input**: `How are you?</s>Good! how about yourself?</s>Great. Would you like to donate today to help the children?</s>`

For more context and usage information see [https://github.rpi.edu/LACAI/dialogue-progression](https://github.rpi.edu/LACAI/dialogue-progression).
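A minimal scoring sketch built from the example input above, assuming the checkpoint loads as a sequence-classification head that emits a single regression logit (the card does not show loading code, so this is an illustration rather than the official usage):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "LACAI/roberta-large-PFG-progression"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Dialogue turns joined with the separator token, as in the example input above.
dialogue = (
    "How are you?</s>Good! how about yourself?</s>"
    "Great. Would you like to donate today to help the children?</s>"
)
inputs = tokenizer(dialogue, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumes one output logit
print(score)  # higher means a more acceptable dialogue
```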
Eleven/xlm-roberta-base-finetuned-panx-de
1f281bfee00cb72e6e15fc0ad6f1fbdf4cb4f6e4
2022-07-05T15:21:55.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Eleven
null
Eleven/xlm-roberta-base-finetuned-panx-de
9
null
transformers
12,766
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.8591509380490846
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.1377
- F1: 0.8592

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| 0.2792 | 1.0 | 525 | 0.1578 | 0.8129 |
| 0.1279 | 2.0 | 1050 | 0.1420 | 0.8439 |
| 0.0836 | 3.0 | 1575 | 0.1377 | 0.8592 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
aihub007/convnext-tiny-224-finetuned-eurosat-albumentations
d1a672d1cd6adebbece54277f0e42ab01ad6fca8
2022-07-06T03:02:41.000Z
[ "pytorch", "tensorboard", "convnext", "image-classification", "dataset:image_folder", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
aihub007
null
aihub007/convnext-tiny-224-finetuned-eurosat-albumentations
9
null
transformers
12,767
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-eurosat-albumentations
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: image_folder
      type: image_folder
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9803703703703703
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# convnext-tiny-224-finetuned-eurosat-albumentations

This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the image_folder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0886
- Accuracy: 0.9804

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.3879 | 1.0 | 95 | 0.2927 | 0.9567 |
| 0.1095 | 2.0 | 190 | 0.1102 | 0.9759 |
| 0.0911 | 3.0 | 285 | 0.0886 | 0.9804 |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
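A minimal classification sketch, assuming the standard image-classification pipeline; `satellite_tile.jpg` is a placeholder for a EuroSAT-style image, not a file from the original card:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="aihub007/convnext-tiny-224-finetuned-eurosat-albumentations",
)

# Placeholder path; any RGB image works, resized by the model's image processor.
print(classifier("satellite_tile.jpg"))
```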
ultra-coder54732/roberta-base-prop-16-train-set
e21145d7ef937c6e9cb4aba1734aab2613c226ba
2022-07-21T06:54:31.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
ultra-coder54732
null
ultra-coder54732/roberta-base-prop-16-train-set
9
null
transformers
12,768
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-prop-16-train-set
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-prop-16-train-set

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
f00d/distilgpt2-finetuned-wikitext2
dd6543ae02c11981476aaa9ddafd16f3a7073e73
2022-07-06T09:31:58.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
f00d
null
f00d/distilgpt2-finetuned-wikitext2
9
null
transformers
12,769
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilgpt2-finetuned-wikitext2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.6421

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
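A minimal generation sketch, assuming the standard text-generation pipeline; the prompt and sampling settings are illustrative, not from the card:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="f00d/distilgpt2-finetuned-wikitext2")

# Illustrative prompt and settings; not part of the original card.
out = generator("The history of the Roman Empire", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```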
ltrctelugu/bigram
c3f635e4025f6f0689bddd2e9b097e845cf3d1fa
2022-07-07T01:00:29.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
ltrctelugu
null
ltrctelugu/bigram
9
null
transformers
12,770
hello
ajders/nl_electra
017aa76a8406e047f5175dfc38700c6b3ee2c960
2022-07-27T22:38:48.000Z
[ "pytorch", "electra", "fill-mask", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
fill-mask
false
ajders
null
ajders/nl_electra
9
null
transformers
12,771
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nl_electra
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# nl_electra

This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.4650
- Accuracy: 0.5392

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 703
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 400.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 0.67 | 500 | 9.9977 | 0.0486 |
| No log | 1.35 | 1000 | 9.5620 | 0.0543 |
| No log | 2.02 | 1500 | 8.9306 | 0.0741 |
| No log | 2.69 | 2000 | 8.2617 | 0.0826 |
| No log | 3.36 | 2500 | 7.6880 | 0.0792 |
| No log | 4.04 | 3000 | 7.3316 | 0.0757 |
| No log | 4.71 | 3500 | 7.1944 | 0.0747 |
| No log | 5.38 | 4000 | 7.1349 | 0.0802 |
| No log | 6.06 | 4500 | 7.0752 | 0.0887 |
| 8.201 | 6.73 | 5000 | 7.0046 | 0.1021 |
| 8.201 | 7.4 | 5500 | 6.9295 | 0.1090 |
| 8.201 | 8.08 | 6000 | 6.8483 | 0.1132 |
| 8.201 | 8.75 | 6500 | 6.7750 | 0.1171 |
| 8.201 | 9.42 | 7000 | 6.7116 | 0.1187 |
| 8.201 | 10.09 | 7500 | 6.6560 | 0.1218 |
| 8.201 | 10.77 | 8000 | 6.6178 | 0.1239 |
| 8.201 | 11.44 | 8500 | 6.5824 | 0.1255 |
| 8.201 | 12.11 | 9000 | 6.5521 | 0.1273 |
| 8.201 | 12.79 | 9500 | 6.5203 | 0.1292 |
| 6.7257 | 13.46 | 10000 | 6.5027 | 0.1303 |
| 6.7257 | 14.13 | 10500 | 6.4809 | 0.1314 |
| 6.7257 | 14.8 | 11000 | 6.4631 | 0.1322 |
| 6.7257 | 15.48 | 11500 | 6.4483 | 0.1329 |
| 6.7257 | 16.15 | 12000 | 6.4320 | 0.1338 |
| 6.7257 | 16.82 | 12500 | 6.4169 | 0.1348 |
| 6.7257 | 17.5 | 13000 | 6.4067 | 0.1359 |
| 6.7257 | 18.17 | 13500 | 6.3994 | 0.1359 |
| 6.7257 | 18.84 | 14000 | 6.3823 | 0.1368 |
| 6.7257 | 19.52 | 14500 | 6.3759 | 0.1373 |
| 6.4502 | 20.19 | 15000 | 6.3629 | 0.1374 |
| 6.4502 | 20.86 | 15500 | 6.3638 | 0.1373 |
| 6.4502 | 21.53 | 16000 | 6.3505 | 0.1382 |
| 6.4502 | 22.21 | 16500 | 6.3416 | 0.1387 |
| 6.4502 | 22.88 | 17000 | 6.3420 | 0.1383 |
| 6.4502 | 23.55 | 17500 | 6.3330 | 0.1389 |
| 6.4502 | 24.23 | 18000 | 6.3289 | 0.1388 |
| 6.4502 | 24.9 | 18500 | 6.3184 | 0.1389 |
| 6.4502 | 25.57 | 19000 | 6.3099 | 0.1396 |
| 6.4502 | 26.24 | 19500 | 6.2789 | 0.1405 |
| 6.3474 | 26.92 | 20000 | 6.2398 | 0.1404 |
| 6.3474 | 27.59 | 20500 | 6.2012 | 0.1412 |
| 6.3474 | 28.26 | 21000 | 6.1803 | 0.1414 |
| 6.3474 | 28.94 | 21500 | 6.1579 | 0.1414 |
| 6.3474 | 29.61 | 22000 | 6.1403 | 0.1431 |
| 6.3474 | 30.28 | 22500 | 6.1316 | 0.1423 |
| 6.3474 | 30.96 | 23000 | 6.1102 | 0.1435 |
| 6.3474 | 31.63 | 23500 | 6.0998 | 0.1439 |
| 6.3474 | 32.3 | 24000 | 6.0867 | 0.1446 |
| 6.3474 | 32.97 | 24500 | 6.0700 | 0.1451 |
| 6.1758 | 33.65 | 25000 | 6.0554 | 0.1457 |
| 6.1758 | 34.32 | 25500 | 6.0487 | 0.1457 |
| 6.1758 | 34.99 | 26000 | 6.0328 | 0.1469 |
| 6.1758 | 35.67 | 26500 | 6.0265 | 0.1469 |
| 6.1758 | 36.34 | 27000 | 5.9992 | 0.1486 |
| 6.1758 | 37.01 | 27500 | 5.9934 | 0.1485 |
| 6.1758 | 37.68 | 28000 | 5.9702 | 0.1501 |
| 6.1758 | 38.36 | 28500 | 5.9661 | 0.1503 |
| 6.1758 | 39.03 | 29000 | 5.9558 | 0.1512 |
| 6.1758 | 39.7 | 29500 | 5.9321 | 0.1528 |
| 6.052 | 40.38 | 30000 | 5.9147 | 0.1532 |
| 6.052 | 41.05 | 30500 | 5.8975 | 0.1545 |
| 6.052 | 41.72 | 31000 | 5.8784 | 0.1566 |
| 6.052 | 42.4 | 31500 | 5.8584 | 0.1586 |
| 6.052 | 43.07 | 32000 | 5.8325 | 0.1603 |
| 6.052 | 43.74 | 32500 | 5.7583 | 0.1664 |
| 6.052 | 44.41 | 33000 | 5.6158 | 0.1787 |
| 6.052 | 45.09 | 33500 | 5.4580 | 0.1917 |
| 6.052 | 45.76 | 34000 | 5.3396 | 0.2010 |
| 6.052 | 46.43 | 34500 | 5.2568 | 0.2082 |
| 5.7995 | 47.11 | 35000 | 5.1775 | 0.2146 |
| 5.7995 | 47.78 | 35500 | 5.1076 | 0.2204 |
| 5.7995 | 48.45 | 36000 | 5.0457 | 0.2258 |
| 5.7995 | 49.13 | 36500 | 4.9932 | 0.2313 |
| 5.7995 | 49.8 | 37000 | 4.9164 | 0.2384 |
| 5.7995 | 50.47 | 37500 | 4.7844 | 0.2521 |
| 5.7995 | 51.14 | 38000 | 4.6598 | 0.2642 |
| 5.7995 | 51.82 | 38500 | 4.5472 | 0.2757 |
| 5.7995 | 52.49 | 39000 | 4.4374 | 0.2871 |
| 5.7995 | 53.16 | 39500 | 4.3399 | 0.2982 |
| 5.0341 | 53.84 | 40000 | 4.2549 | 0.3083 |
| 5.0341 | 54.51 | 40500 | 4.1795 | 0.3170 |
| 5.0341 | 55.18 | 41000 | 4.1017 | 0.3274 |
| 5.0341 | 55.85 | 41500 | 4.0308 | 0.3375 |
| 5.0341 | 56.53 | 42000 | 3.9673 | 0.3462 |
| 5.0341 | 57.2 | 42500 | 3.9130 | 0.3538 |
| 5.0341 | 57.87 | 43000 | 3.8672 | 0.3599 |
| 5.0341 | 58.55 | 43500 | 3.8249 | 0.3656 |
| 5.0341 | 59.22 | 44000 | 3.7748 | 0.3728 |
| 5.0341 | 59.89 | 44500 | 3.7459 | 0.3768 |
| 4.2119 | 60.57 | 45000 | 3.7089 | 0.3808 |
| 4.2119 | 61.24 | 45500 | 3.6732 | 0.3857 |
| 4.2119 | 61.91 | 46000 | 3.6546 | 0.3881 |
| 4.2119 | 62.58 | 46500 | 3.6205 | 0.3921 |
| 4.2119 | 63.26 | 47000 | 3.5908 | 0.3960 |
| 4.2119 | 63.93 | 47500 | 3.5627 | 0.3991 |
| 4.2119 | 64.6 | 48000 | 3.5466 | 0.4019 |
| 4.2119 | 65.28 | 48500 | 3.5262 | 0.4039 |
| 4.2119 | 65.95 | 49000 | 3.4987 | 0.4074 |
| 4.2119 | 66.62 | 49500 | 3.4817 | 0.4093 |
| 3.8182 | 67.29 | 50000 | 3.4608 | 0.4119 |
| 3.8182 | 67.97 | 50500 | 3.4467 | 0.4142 |
| 3.8182 | 68.64 | 51000 | 3.4280 | 0.4163 |
| 3.8182 | 69.31 | 51500 | 3.4165 | 0.4175 |
| 3.8182 | 69.99 | 52000 | 3.3970 | 0.4199 |
| 3.8182 | 70.66 | 52500 | 3.3738 | 0.4227 |
| 3.8182 | 71.33 | 53000 | 3.3640 | 0.4242 |
| 3.8182 | 72.01 | 53500 | 3.3583 | 0.4252 |
| 3.8182 | 72.68 | 54000 | 3.3319 | 0.4279 |
| 3.8182 | 73.35 | 54500 | 3.3153 | 0.4303 |
| 3.5946 | 74.02 | 55000 | 3.3098 | 0.4304 |
| 3.5946 | 74.7 | 55500 | 3.2949 | 0.4328 |
| 3.5946 | 75.37 | 56000 | 3.2820 | 0.4335 |
| 3.5946 | 76.04 | 56500 | 3.2686 | 0.4355 |
| 3.5946 | 76.72 | 57000 | 3.2663 | 0.4359 |
| 3.5946 | 77.39 | 57500 | 3.2482 | 0.4379 |
| 3.5946 | 78.06 | 58000 | 3.2344 | 0.4393 |
| 3.5946 | 78.73 | 58500 | 3.2281 | 0.4407 |
| 3.5946 | 79.41 | 59000 | 3.2172 | 0.4412 |
| 3.5946 | 80.08 | 59500 | 3.2110 | 0.4420 |
| 3.4435 | 80.75 | 60000 | 3.1940 | 0.4443 |
| 3.4435 | 81.43 | 60500 | 3.1837 | 0.4455 |
| 3.4435 | 82.1 | 61000 | 3.1744 | 0.4469 |
| 3.4435 | 82.77 | 61500 | 3.1611 | 0.4483 |
| 3.4435 | 83.45 | 62000 | 3.1531 | 0.4496 |
| 3.4435 | 84.12 | 62500 | 3.1524 | 0.4499 |
| 3.4435 | 84.79 | 63000 | 3.1431 | 0.4501 |
| 3.4435 | 85.46 | 63500 | 3.1287 | 0.4527 |
| 3.4435 | 86.14 | 64000 | 3.1192 | 0.4533 |
| 3.4435 | 86.81 | 64500 | 3.1107 | 0.4547 |
| 3.3301 | 87.48 | 65000 | 3.1041 | 0.4553 |
| 3.3301 | 88.16 | 65500 | 3.0999 | 0.4562 |
| 3.3301 | 88.83 | 66000 | 3.0882 | 0.4576 |
| 3.3301 | 89.5 | 66500 | 3.0777 | 0.4589 |
| 3.3301
| 90.17 | 67000 | 3.0726 | 0.4588 | | 3.3301 | 90.85 | 67500 | 3.0676 | 0.4601 | | 3.3301 | 91.52 | 68000 | 3.0616 | 0.4602 | | 3.3301 | 92.19 | 68500 | 3.0523 | 0.4621 | | 3.3301 | 92.87 | 69000 | 3.0464 | 0.4624 | | 3.3301 | 93.54 | 69500 | 3.0405 | 0.4635 | | 3.2418 | 94.21 | 70000 | 3.0312 | 0.4649 | | 3.2418 | 94.89 | 70500 | 3.0209 | 0.4653 | | 3.2418 | 95.56 | 71000 | 3.0202 | 0.4657 | | 3.2418 | 96.23 | 71500 | 3.0101 | 0.4676 | | 3.2418 | 96.9 | 72000 | 3.0105 | 0.4666 | | 3.2418 | 97.58 | 72500 | 3.0023 | 0.4685 | | 3.2418 | 98.25 | 73000 | 3.0008 | 0.4680 | | 3.2418 | 98.92 | 73500 | 2.9882 | 0.4691 | | 3.2418 | 99.6 | 74000 | 2.9855 | 0.4702 | | 3.2418 | 100.27 | 74500 | 2.9790 | 0.4709 | | 3.1698 | 100.94 | 75000 | 2.9680 | 0.4716 | | 3.1698 | 101.61 | 75500 | 2.9667 | 0.4724 | | 3.1698 | 102.29 | 76000 | 2.9657 | 0.4726 | | 3.1698 | 102.96 | 76500 | 2.9623 | 0.4731 | | 3.1698 | 103.63 | 77000 | 2.9515 | 0.4745 | | 3.1698 | 104.31 | 77500 | 2.9471 | 0.4753 | | 3.1698 | 104.98 | 78000 | 2.9407 | 0.4756 | | 3.1698 | 105.65 | 78500 | 2.9388 | 0.4761 | | 3.1698 | 106.33 | 79000 | 2.9369 | 0.4766 | | 3.1698 | 107.0 | 79500 | 2.9297 | 0.4762 | | 3.1101 | 107.67 | 80000 | 2.9291 | 0.4776 | | 3.1101 | 108.34 | 80500 | 2.9139 | 0.4788 | | 3.1101 | 109.02 | 81000 | 2.9113 | 0.4790 | | 3.1101 | 109.69 | 81500 | 2.9057 | 0.4798 | | 3.1101 | 110.36 | 82000 | 2.9058 | 0.4804 | | 3.1101 | 111.04 | 82500 | 2.9019 | 0.4807 | | 3.1101 | 111.71 | 83000 | 2.8934 | 0.4818 | | 3.1101 | 112.38 | 83500 | 2.8864 | 0.4825 | | 3.1101 | 113.06 | 84000 | 2.8926 | 0.4815 | | 3.1101 | 113.73 | 84500 | 2.8812 | 0.4830 | | 3.058 | 114.4 | 85000 | 2.8740 | 0.4840 | | 3.058 | 115.07 | 85500 | 2.8730 | 0.4840 | | 3.058 | 115.75 | 86000 | 2.8694 | 0.4847 | | 3.058 | 116.42 | 86500 | 2.8694 | 0.4848 | | 3.058 | 117.09 | 87000 | 2.8601 | 0.4862 | | 3.058 | 117.77 | 87500 | 2.8547 | 0.4862 | | 3.058 | 118.44 | 88000 | 2.8538 | 0.4861 | | 3.058 | 119.11 | 88500 | 2.8494 | 0.4876 | | 3.058 | 119.78 | 89000 | 2.8430 | 0.4882 | | 3.058 | 120.46 | 89500 | 2.8436 | 0.4875 | | 3.0129 | 121.13 | 90000 | 2.8402 | 0.4884 | | 3.0129 | 121.8 | 90500 | 2.8353 | 0.4888 | | 3.0129 | 122.48 | 91000 | 2.8271 | 0.4896 | | 3.0129 | 123.15 | 91500 | 2.8236 | 0.4900 | | 3.0129 | 123.82 | 92000 | 2.8199 | 0.4913 | | 3.0129 | 124.5 | 92500 | 2.8119 | 0.4916 | | 3.0129 | 125.17 | 93000 | 2.8138 | 0.4916 | | 3.0129 | 125.84 | 93500 | 2.8089 | 0.4925 | | 3.0129 | 126.51 | 94000 | 2.8067 | 0.4928 | | 3.0129 | 127.19 | 94500 | 2.8010 | 0.4939 | | 2.9701 | 127.86 | 95000 | 2.7992 | 0.4938 | | 2.9701 | 128.53 | 95500 | 2.7953 | 0.4948 | | 2.9701 | 129.21 | 96000 | 2.7964 | 0.4942 | | 2.9701 | 129.88 | 96500 | 2.7838 | 0.4955 | | 2.9701 | 130.55 | 97000 | 2.7770 | 0.4968 | | 2.9701 | 131.22 | 97500 | 2.7800 | 0.4962 | | 2.9701 | 131.9 | 98000 | 2.7743 | 0.4972 | | 2.9701 | 132.57 | 98500 | 2.7696 | 0.4973 | | 2.9701 | 133.24 | 99000 | 2.7691 | 0.4980 | | 2.9701 | 133.92 | 99500 | 2.7612 | 0.4989 | | 2.9289 | 134.59 | 100000 | 2.7606 | 0.4987 | | 2.9289 | 135.26 | 100500 | 2.7545 | 0.4993 | | 2.9289 | 135.94 | 101000 | 2.7544 | 0.4999 | | 2.9289 | 136.61 | 101500 | 2.7550 | 0.4999 | | 2.9289 | 137.28 | 102000 | 2.7510 | 0.5001 | | 2.9289 | 137.95 | 102500 | 2.7480 | 0.5002 | | 2.9289 | 138.63 | 103000 | 2.7422 | 0.5012 | | 2.9289 | 139.3 | 103500 | 2.7419 | 0.5014 | | 2.9289 | 139.97 | 104000 | 2.7416 | 0.5009 | | 2.9289 | 140.65 | 104500 | 2.7412 | 0.5017 | | 2.8968 | 141.32 | 105000 | 2.7356 | 0.5023 | | 2.8968 | 141.99 | 105500 | 2.7303 | 
0.5027 | | 2.8968 | 142.66 | 106000 | 2.7359 | 0.5029 | | 2.8968 | 143.34 | 106500 | 2.7283 | 0.5032 | | 2.8968 | 144.01 | 107000 | 2.7226 | 0.5033 | | 2.8968 | 144.68 | 107500 | 2.7247 | 0.5039 | | 2.8968 | 145.36 | 108000 | 2.7209 | 0.5044 | | 2.8968 | 146.03 | 108500 | 2.7210 | 0.5039 | | 2.8968 | 146.7 | 109000 | 2.7135 | 0.5046 | | 2.8968 | 147.38 | 109500 | 2.7139 | 0.5048 | | 2.8697 | 148.05 | 110000 | 2.7167 | 0.5050 | | 2.8697 | 148.72 | 110500 | 2.7125 | 0.5058 | | 2.8697 | 149.39 | 111000 | 2.7064 | 0.5060 | | 2.8697 | 150.07 | 111500 | 2.7024 | 0.5067 | | 2.8697 | 150.74 | 112000 | 2.7035 | 0.5067 | | 2.8697 | 151.41 | 112500 | 2.7034 | 0.5067 | | 2.8697 | 152.09 | 113000 | 2.6967 | 0.5073 | | 2.8697 | 152.76 | 113500 | 2.6982 | 0.5070 | | 2.8697 | 153.43 | 114000 | 2.6948 | 0.5079 | | 2.8697 | 154.1 | 114500 | 2.6946 | 0.5076 | | 2.8457 | 154.78 | 115000 | 2.6918 | 0.5078 | | 2.8457 | 155.45 | 115500 | 2.6917 | 0.5078 | | 2.8457 | 156.12 | 116000 | 2.6868 | 0.5086 | | 2.8457 | 156.8 | 116500 | 2.6870 | 0.5084 | | 2.8457 | 157.47 | 117000 | 2.6830 | 0.5091 | | 2.8457 | 158.14 | 117500 | 2.6824 | 0.5090 | | 2.8457 | 158.82 | 118000 | 2.6812 | 0.5092 | | 2.8457 | 159.49 | 118500 | 2.6747 | 0.5098 | | 2.8457 | 160.16 | 119000 | 2.6747 | 0.5105 | | 2.8457 | 160.83 | 119500 | 2.6750 | 0.5102 | | 2.825 | 161.51 | 120000 | 2.6761 | 0.5102 | | 2.825 | 162.18 | 120500 | 2.6670 | 0.5115 | | 2.825 | 162.85 | 121000 | 2.6740 | 0.5104 | | 2.825 | 163.53 | 121500 | 2.6700 | 0.5106 | | 2.825 | 164.2 | 122000 | 2.6629 | 0.5119 | | 2.825 | 164.87 | 122500 | 2.6642 | 0.5117 | | 2.825 | 165.54 | 123000 | 2.6664 | 0.5109 | | 2.825 | 166.22 | 123500 | 2.6626 | 0.5117 | | 2.825 | 166.89 | 124000 | 2.6561 | 0.5130 | | 2.825 | 167.56 | 124500 | 2.6612 | 0.5125 | | 2.8059 | 168.24 | 125000 | 2.6594 | 0.5123 | | 2.8059 | 168.91 | 125500 | 2.6508 | 0.5132 | | 2.8059 | 169.58 | 126000 | 2.6477 | 0.5134 | | 2.8059 | 170.26 | 126500 | 2.6527 | 0.5133 | | 2.8059 | 170.93 | 127000 | 2.6519 | 0.5136 | | 2.8059 | 171.6 | 127500 | 2.6456 | 0.5141 | | 2.8059 | 172.27 | 128000 | 2.6473 | 0.5139 | | 2.8059 | 172.95 | 128500 | 2.6426 | 0.5144 | | 2.8059 | 173.62 | 129000 | 2.6454 | 0.5137 | | 2.8059 | 174.29 | 129500 | 2.6427 | 0.5147 | | 2.788 | 174.97 | 130000 | 2.6376 | 0.5150 | | 2.788 | 175.64 | 130500 | 2.6366 | 0.5154 | | 2.788 | 176.31 | 131000 | 2.6327 | 0.5156 | | 2.788 | 176.98 | 131500 | 2.6328 | 0.5157 | | 2.788 | 177.66 | 132000 | 2.6335 | 0.5156 | | 2.788 | 178.33 | 132500 | 2.6302 | 0.5166 | | 2.788 | 179.0 | 133000 | 2.6333 | 0.5160 | | 2.788 | 179.68 | 133500 | 2.6253 | 0.5171 | | 2.788 | 180.35 | 134000 | 2.6237 | 0.5167 | | 2.788 | 181.02 | 134500 | 2.6269 | 0.5165 | | 2.7723 | 181.7 | 135000 | 2.6283 | 0.5164 | | 2.7723 | 182.37 | 135500 | 2.6255 | 0.5174 | | 2.7723 | 183.04 | 136000 | 2.6254 | 0.5175 | | 2.7723 | 183.71 | 136500 | 2.6231 | 0.5172 | | 2.7723 | 184.39 | 137000 | 2.6181 | 0.5173 | | 2.7723 | 185.06 | 137500 | 2.6260 | 0.5168 | | 2.7723 | 185.73 | 138000 | 2.6183 | 0.5176 | | 2.7723 | 186.41 | 138500 | 2.6174 | 0.5182 | | 2.7723 | 187.08 | 139000 | 2.6144 | 0.5182 | | 2.7723 | 187.75 | 139500 | 2.6152 | 0.5186 | | 2.7575 | 188.43 | 140000 | 2.6150 | 0.5183 | | 2.7575 | 189.1 | 140500 | 2.6110 | 0.5190 | | 2.7575 | 189.77 | 141000 | 2.6044 | 0.5194 | | 2.7575 | 190.44 | 141500 | 2.6083 | 0.5186 | | 2.7575 | 191.12 | 142000 | 2.6102 | 0.5189 | | 2.7575 | 191.79 | 142500 | 2.6063 | 0.5195 | | 2.7575 | 192.46 | 143000 | 2.6071 | 0.5198 | | 2.7575 | 193.14 | 143500 | 2.6090 | 0.5191 | | 
2.7575 | 193.81 | 144000 | 2.6068 | 0.5200 | | 2.7575 | 194.48 | 144500 | 2.6032 | 0.5200 | | 2.7445 | 195.15 | 145000 | 2.6031 | 0.5200 | | 2.7445 | 195.83 | 145500 | 2.6007 | 0.5201 | | 2.7445 | 196.5 | 146000 | 2.5998 | 0.5203 | | 2.7445 | 197.17 | 146500 | 2.5980 | 0.5208 | | 2.7445 | 197.85 | 147000 | 2.5952 | 0.5211 | | 2.7445 | 198.52 | 147500 | 2.5977 | 0.5210 | | 2.7445 | 199.19 | 148000 | 2.5922 | 0.5212 | | 2.7445 | 199.87 | 148500 | 2.5936 | 0.5211 | | 2.7445 | 200.54 | 149000 | 2.5933 | 0.5219 | | 2.7445 | 201.21 | 149500 | 2.5887 | 0.5219 | | 2.7324 | 201.88 | 150000 | 2.5911 | 0.5215 | | 2.7324 | 202.56 | 150500 | 2.5892 | 0.5219 | | 2.7324 | 203.23 | 151000 | 2.5875 | 0.5218 | | 2.7324 | 203.9 | 151500 | 2.5898 | 0.5220 | | 2.7324 | 204.58 | 152000 | 2.5872 | 0.5223 | | 2.7324 | 205.25 | 152500 | 2.5805 | 0.5226 | | 2.7324 | 205.92 | 153000 | 2.5861 | 0.5225 | | 2.7324 | 206.59 | 153500 | 2.5839 | 0.5223 | | 2.7324 | 207.27 | 154000 | 2.5804 | 0.5232 | | 2.7324 | 207.94 | 154500 | 2.5766 | 0.5235 | | 2.7212 | 208.61 | 155000 | 2.5764 | 0.5233 | | 2.7212 | 209.29 | 155500 | 2.5801 | 0.5233 | | 2.7212 | 209.96 | 156000 | 2.5737 | 0.5241 | | 2.7212 | 210.63 | 156500 | 2.5769 | 0.5236 | | 2.7212 | 211.31 | 157000 | 2.5769 | 0.5237 | | 2.7212 | 211.98 | 157500 | 2.5748 | 0.5239 | | 2.7212 | 212.65 | 158000 | 2.5745 | 0.5230 | | 2.7212 | 213.32 | 158500 | 2.5725 | 0.5240 | | 2.7212 | 214.0 | 159000 | 2.5736 | 0.5239 | | 2.7212 | 214.67 | 159500 | 2.5675 | 0.5252 | | 2.7103 | 215.34 | 160000 | 2.5678 | 0.5245 | | 2.7103 | 216.02 | 160500 | 2.5691 | 0.5250 | | 2.7103 | 216.69 | 161000 | 2.5688 | 0.5245 | | 2.7103 | 217.36 | 161500 | 2.5681 | 0.5251 | | 2.7103 | 218.03 | 162000 | 2.5582 | 0.5255 | | 2.7103 | 218.71 | 162500 | 2.5675 | 0.5247 | | 2.7103 | 219.38 | 163000 | 2.5609 | 0.5259 | | 2.7103 | 220.05 | 163500 | 2.5625 | 0.5254 | | 2.7103 | 220.73 | 164000 | 2.5630 | 0.5254 | | 2.7103 | 221.4 | 164500 | 2.5607 | 0.5265 | | 2.7003 | 222.07 | 165000 | 2.5615 | 0.5260 | | 2.7003 | 222.75 | 165500 | 2.5660 | 0.5248 | | 2.7003 | 223.42 | 166000 | 2.5569 | 0.5263 | | 2.7003 | 224.09 | 166500 | 2.5610 | 0.5255 | | 2.7003 | 224.76 | 167000 | 2.5569 | 0.5263 | | 2.7003 | 225.44 | 167500 | 2.5534 | 0.5265 | | 2.7003 | 226.11 | 168000 | 2.5573 | 0.5259 | | 2.7003 | 226.78 | 168500 | 2.5559 | 0.5264 | | 2.7003 | 227.46 | 169000 | 2.5508 | 0.5277 | | 2.7003 | 228.13 | 169500 | 2.5500 | 0.5276 | | 2.6915 | 228.8 | 170000 | 2.5501 | 0.5270 | | 2.6915 | 229.47 | 170500 | 2.5508 | 0.5273 | | 2.6915 | 230.15 | 171000 | 2.5523 | 0.5267 | | 2.6915 | 230.82 | 171500 | 2.5464 | 0.5276 | | 2.6915 | 231.49 | 172000 | 2.5482 | 0.5271 | | 2.6915 | 232.17 | 172500 | 2.5486 | 0.5270 | | 2.6915 | 232.84 | 173000 | 2.5474 | 0.5275 | | 2.6915 | 233.51 | 173500 | 2.5483 | 0.5270 | | 2.6915 | 234.19 | 174000 | 2.5480 | 0.5276 | | 2.6915 | 234.86 | 174500 | 2.5486 | 0.5278 | | 2.6833 | 235.53 | 175000 | 2.5484 | 0.5273 | | 2.6833 | 236.2 | 175500 | 2.5436 | 0.5277 | | 2.6833 | 236.88 | 176000 | 2.5448 | 0.5278 | | 2.6833 | 237.55 | 176500 | 2.5430 | 0.5284 | | 2.6833 | 238.22 | 177000 | 2.5433 | 0.5279 | | 2.6833 | 238.9 | 177500 | 2.5398 | 0.5288 | | 2.6833 | 239.57 | 178000 | 2.5424 | 0.5282 | | 2.6833 | 240.24 | 178500 | 2.5371 | 0.5291 | | 2.6833 | 240.91 | 179000 | 2.5372 | 0.5294 | | 2.6833 | 241.59 | 179500 | 2.5368 | 0.5290 | | 2.6753 | 242.26 | 180000 | 2.5383 | 0.5289 | | 2.6753 | 242.93 | 180500 | 2.5387 | 0.5289 | | 2.6753 | 243.61 | 181000 | 2.5351 | 0.5295 | | 2.6753 | 244.28 | 181500 | 2.5340 | 
0.5296 | | 2.6753 | 244.95 | 182000 | 2.5349 | 0.5289 | | 2.6753 | 245.63 | 182500 | 2.5358 | 0.5295 | | 2.6753 | 246.3 | 183000 | 2.5333 | 0.5299 | | 2.6753 | 246.97 | 183500 | 2.5363 | 0.5292 | | 2.6753 | 247.64 | 184000 | 2.5323 | 0.5298 | | 2.6753 | 248.32 | 184500 | 2.5286 | 0.5299 | | 2.6679 | 248.99 | 185000 | 2.5276 | 0.5306 | | 2.6679 | 249.66 | 185500 | 2.5249 | 0.5308 | | 2.6679 | 250.34 | 186000 | 2.5308 | 0.5302 | | 2.6679 | 251.01 | 186500 | 2.5307 | 0.5297 | | 2.6679 | 251.68 | 187000 | 2.5293 | 0.5305 | | 2.6679 | 252.36 | 187500 | 2.5255 | 0.5306 | | 2.6679 | 253.03 | 188000 | 2.5244 | 0.5312 | | 2.6679 | 253.7 | 188500 | 2.5278 | 0.5305 | | 2.6679 | 254.37 | 189000 | 2.5212 | 0.5317 | | 2.6679 | 255.05 | 189500 | 2.5256 | 0.5307 | | 2.6611 | 255.72 | 190000 | 2.5273 | 0.5303 | | 2.6611 | 256.39 | 190500 | 2.5222 | 0.5310 | | 2.6611 | 257.07 | 191000 | 2.5237 | 0.5311 | | 2.6611 | 257.74 | 191500 | 2.5258 | 0.5309 | | 2.6611 | 258.41 | 192000 | 2.5219 | 0.5313 | | 2.6611 | 259.08 | 192500 | 2.5243 | 0.5314 | | 2.6611 | 259.76 | 193000 | 2.5203 | 0.5319 | | 2.6611 | 260.43 | 193500 | 2.5205 | 0.5313 | | 2.6611 | 261.1 | 194000 | 2.5205 | 0.5322 | | 2.6611 | 261.78 | 194500 | 2.5196 | 0.5317 | | 2.655 | 262.45 | 195000 | 2.5199 | 0.5315 | | 2.655 | 263.12 | 195500 | 2.5226 | 0.5315 | | 2.655 | 263.8 | 196000 | 2.5175 | 0.5316 | | 2.655 | 264.47 | 196500 | 2.5160 | 0.5322 | | 2.655 | 265.14 | 197000 | 2.5185 | 0.5316 | | 2.655 | 265.81 | 197500 | 2.5133 | 0.5322 | | 2.655 | 266.49 | 198000 | 2.5163 | 0.5318 | | 2.655 | 267.16 | 198500 | 2.5135 | 0.5325 | | 2.655 | 267.83 | 199000 | 2.5132 | 0.5326 | | 2.655 | 268.51 | 199500 | 2.5148 | 0.5323 | | 2.6486 | 269.18 | 200000 | 2.5194 | 0.5317 | | 2.6486 | 269.85 | 200500 | 2.5162 | 0.5321 | | 2.6486 | 270.52 | 201000 | 2.5090 | 0.5332 | | 2.6486 | 271.2 | 201500 | 2.5126 | 0.5325 | | 2.6486 | 271.87 | 202000 | 2.5155 | 0.5320 | | 2.6486 | 272.54 | 202500 | 2.5099 | 0.5329 | | 2.6486 | 273.22 | 203000 | 2.5130 | 0.5325 | | 2.6486 | 273.89 | 203500 | 2.5064 | 0.5329 | | 2.6486 | 274.56 | 204000 | 2.5154 | 0.5319 | | 2.6486 | 275.24 | 204500 | 2.5097 | 0.5329 | | 2.6433 | 275.91 | 205000 | 2.5075 | 0.5334 | | 2.6433 | 276.58 | 205500 | 2.5120 | 0.5325 | | 2.6433 | 277.25 | 206000 | 2.5100 | 0.5329 | | 2.6433 | 277.93 | 206500 | 2.5115 | 0.5332 | | 2.6433 | 278.6 | 207000 | 2.5071 | 0.5332 | | 2.6433 | 279.27 | 207500 | 2.5075 | 0.5335 | | 2.6433 | 279.95 | 208000 | 2.5020 | 0.5338 | | 2.6433 | 280.62 | 208500 | 2.5025 | 0.5340 | | 2.6433 | 281.29 | 209000 | 2.5064 | 0.5333 | | 2.6433 | 281.96 | 209500 | 2.5038 | 0.5336 | | 2.6383 | 282.64 | 210000 | 2.5041 | 0.5340 | | 2.6383 | 283.31 | 210500 | 2.5075 | 0.5336 | | 2.6383 | 283.98 | 211000 | 2.5028 | 0.5333 | | 2.6383 | 284.66 | 211500 | 2.5008 | 0.5340 | | 2.6383 | 285.33 | 212000 | 2.5005 | 0.5345 | | 2.6383 | 286.0 | 212500 | 2.5020 | 0.5334 | | 2.6383 | 286.68 | 213000 | 2.5011 | 0.5344 | | 2.6383 | 287.35 | 213500 | 2.5028 | 0.5338 | | 2.6383 | 288.02 | 214000 | 2.4970 | 0.5340 | | 2.6383 | 288.69 | 214500 | 2.4995 | 0.5343 | | 2.6336 | 289.37 | 215000 | 2.5010 | 0.5343 | | 2.6336 | 290.04 | 215500 | 2.5060 | 0.5336 | | 2.6336 | 290.71 | 216000 | 2.4955 | 0.5347 | | 2.6336 | 291.39 | 216500 | 2.4972 | 0.5349 | | 2.6336 | 292.06 | 217000 | 2.4977 | 0.5349 | | 2.6336 | 292.73 | 217500 | 2.4973 | 0.5346 | | 2.6336 | 293.4 | 218000 | 2.4981 | 0.5346 | | 2.6336 | 294.08 | 218500 | 2.4941 | 0.5346 | | 2.6336 | 294.75 | 219000 | 2.4978 | 0.5350 | | 2.6336 | 295.42 | 219500 | 2.4990 | 
0.5343 | | 2.6288 | 296.1 | 220000 | 2.4929 | 0.5347 | | 2.6288 | 296.77 | 220500 | 2.4937 | 0.5349 | | 2.6288 | 297.44 | 221000 | 2.4938 | 0.5349 | | 2.6288 | 298.12 | 221500 | 2.4938 | 0.5343 | | 2.6288 | 298.79 | 222000 | 2.4924 | 0.5354 | | 2.6288 | 299.46 | 222500 | 2.4953 | 0.5348 | | 2.6288 | 300.13 | 223000 | 2.4930 | 0.5347 | | 2.6288 | 300.81 | 223500 | 2.4934 | 0.5353 | | 2.6288 | 301.48 | 224000 | 2.4942 | 0.5348 | | 2.6288 | 302.15 | 224500 | 2.4960 | 0.5344 | | 2.6246 | 302.83 | 225000 | 2.4875 | 0.5357 | | 2.6246 | 303.5 | 225500 | 2.4898 | 0.5355 | | 2.6246 | 304.17 | 226000 | 2.4847 | 0.5366 | | 2.6246 | 304.84 | 226500 | 2.4970 | 0.5348 | | 2.6246 | 305.52 | 227000 | 2.4905 | 0.5356 | | 2.6246 | 306.19 | 227500 | 2.4873 | 0.5361 | | 2.6246 | 306.86 | 228000 | 2.4939 | 0.5350 | | 2.6246 | 307.54 | 228500 | 2.4910 | 0.5360 | | 2.6246 | 308.21 | 229000 | 2.4886 | 0.5355 | | 2.6246 | 308.88 | 229500 | 2.4890 | 0.5369 | | 2.6207 | 309.56 | 230000 | 2.4900 | 0.5360 | | 2.6207 | 310.23 | 230500 | 2.4885 | 0.5354 | | 2.6207 | 310.9 | 231000 | 2.4895 | 0.5358 | | 2.6207 | 311.57 | 231500 | 2.4871 | 0.5358 | | 2.6207 | 312.25 | 232000 | 2.4914 | 0.5352 | | 2.6207 | 312.92 | 232500 | 2.4843 | 0.5366 | | 2.6207 | 313.59 | 233000 | 2.4837 | 0.5365 | | 2.6207 | 314.27 | 233500 | 2.4883 | 0.5360 | | 2.6207 | 314.94 | 234000 | 2.4839 | 0.5366 | | 2.6207 | 315.61 | 234500 | 2.4854 | 0.5366 | | 2.6171 | 316.29 | 235000 | 2.4833 | 0.5367 | | 2.6171 | 316.96 | 235500 | 2.4783 | 0.5374 | | 2.6171 | 317.63 | 236000 | 2.4807 | 0.5370 | | 2.6171 | 318.3 | 236500 | 2.4824 | 0.5366 | | 2.6171 | 318.98 | 237000 | 2.4857 | 0.5361 | | 2.6171 | 319.65 | 237500 | 2.4817 | 0.5366 | | 2.6171 | 320.32 | 238000 | 2.4855 | 0.5364 | | 2.6171 | 321.0 | 238500 | 2.4834 | 0.5367 | | 2.6171 | 321.67 | 239000 | 2.4831 | 0.5363 | | 2.6171 | 322.34 | 239500 | 2.4806 | 0.5370 | | 2.6134 | 323.01 | 240000 | 2.4842 | 0.5365 | | 2.6134 | 323.69 | 240500 | 2.4830 | 0.5364 | | 2.6134 | 324.36 | 241000 | 2.4822 | 0.5367 | | 2.6134 | 325.03 | 241500 | 2.4805 | 0.5373 | | 2.6134 | 325.71 | 242000 | 2.4838 | 0.5365 | | 2.6134 | 326.38 | 242500 | 2.4776 | 0.5371 | | 2.6134 | 327.05 | 243000 | 2.4786 | 0.5376 | | 2.6134 | 327.73 | 243500 | 2.4824 | 0.5371 | | 2.6134 | 328.4 | 244000 | 2.4842 | 0.5363 | | 2.6134 | 329.07 | 244500 | 2.4790 | 0.5375 | | 2.6107 | 329.74 | 245000 | 2.4770 | 0.5372 | | 2.6107 | 330.42 | 245500 | 2.4805 | 0.5375 | | 2.6107 | 331.09 | 246000 | 2.4839 | 0.5370 | | 2.6107 | 331.76 | 246500 | 2.4802 | 0.5367 | | 2.6107 | 332.44 | 247000 | 2.4796 | 0.5373 | | 2.6107 | 333.11 | 247500 | 2.4736 | 0.5377 | | 2.6107 | 333.78 | 248000 | 2.4789 | 0.5374 | | 2.6107 | 334.45 | 248500 | 2.4761 | 0.5375 | | 2.6107 | 335.13 | 249000 | 2.4728 | 0.5379 | | 2.6107 | 335.8 | 249500 | 2.4702 | 0.5386 | | 2.608 | 336.47 | 250000 | 2.4764 | 0.5377 | | 2.608 | 337.15 | 250500 | 2.4738 | 0.5380 | | 2.608 | 337.82 | 251000 | 2.4795 | 0.5371 | | 2.608 | 338.49 | 251500 | 2.4702 | 0.5387 | | 2.608 | 339.17 | 252000 | 2.4823 | 0.5369 | | 2.608 | 339.84 | 252500 | 2.4741 | 0.5382 | | 2.608 | 340.51 | 253000 | 2.4718 | 0.5382 | | 2.608 | 341.18 | 253500 | 2.4731 | 0.5378 | | 2.608 | 341.86 | 254000 | 2.4780 | 0.5373 | | 2.608 | 342.53 | 254500 | 2.4706 | 0.5388 | | 2.6058 | 343.2 | 255000 | 2.4707 | 0.5386 | | 2.6058 | 343.88 | 255500 | 2.4725 | 0.5380 | | 2.6058 | 344.55 | 256000 | 2.4744 | 0.5382 | | 2.6058 | 345.22 | 256500 | 2.4766 | 0.5374 | | 2.6058 | 345.89 | 257000 | 2.4736 | 0.5378 | | 2.6058 | 346.57 | 257500 | 2.4731 | 
0.5383 | | 2.6058 | 347.24 | 258000 | 2.4754 | 0.5377 | | 2.6058 | 347.91 | 258500 | 2.4749 | 0.5382 | | 2.6058 | 348.59 | 259000 | 2.4735 | 0.5378 | | 2.6058 | 349.26 | 259500 | 2.4716 | 0.5384 | | 2.6027 | 349.93 | 260000 | 2.4726 | 0.5378 | | 2.6027 | 350.61 | 260500 | 2.4733 | 0.5378 | | 2.6027 | 351.28 | 261000 | 2.4698 | 0.5386 | | 2.6027 | 351.95 | 261500 | 2.4702 | 0.5388 | | 2.6027 | 352.62 | 262000 | 2.4673 | 0.5390 | | 2.6027 | 353.3 | 262500 | 2.4683 | 0.5390 | | 2.6027 | 353.97 | 263000 | 2.4739 | 0.5379 | | 2.6027 | 354.64 | 263500 | 2.4743 | 0.5382 | | 2.6027 | 355.32 | 264000 | 2.4694 | 0.5388 | | 2.6027 | 355.99 | 264500 | 2.4671 | 0.5391 | | 2.6009 | 356.66 | 265000 | 2.4747 | 0.5383 | | 2.6009 | 357.34 | 265500 | 2.4703 | 0.5382 | | 2.6009 | 358.01 | 266000 | 2.4695 | 0.5388 | | 2.6009 | 358.68 | 266500 | 2.4725 | 0.5380 | | 2.6009 | 359.35 | 267000 | 2.4639 | 0.5397 | | 2.6009 | 360.03 | 267500 | 2.4686 | 0.5385 | | 2.6009 | 360.7 | 268000 | 2.4698 | 0.5386 | | 2.6009 | 361.37 | 268500 | 2.4694 | 0.5386 | | 2.6009 | 362.05 | 269000 | 2.4680 | 0.5390 | | 2.6009 | 362.72 | 269500 | 2.4728 | 0.5383 | | 2.5989 | 363.39 | 270000 | 2.4697 | 0.5385 | | 2.5989 | 364.06 | 270500 | 2.4701 | 0.5387 | | 2.5989 | 364.74 | 271000 | 2.4702 | 0.5387 | | 2.5989 | 365.41 | 271500 | 2.4687 | 0.5390 | | 2.5989 | 366.08 | 272000 | 2.4725 | 0.5382 | | 2.5989 | 366.76 | 272500 | 2.4673 | 0.5384 | | 2.5989 | 367.43 | 273000 | 2.4659 | 0.5390 | | 2.5989 | 368.1 | 273500 | 2.4686 | 0.5389 | | 2.5989 | 368.78 | 274000 | 2.4677 | 0.5382 | | 2.5989 | 369.45 | 274500 | 2.4632 | 0.5389 | | 2.5977 | 370.12 | 275000 | 2.4676 | 0.5385 | | 2.5977 | 370.79 | 275500 | 2.4699 | 0.5388 | | 2.5977 | 371.47 | 276000 | 2.4629 | 0.5394 | | 2.5977 | 372.14 | 276500 | 2.4720 | 0.5380 | | 2.5977 | 372.81 | 277000 | 2.4678 | 0.5391 | | 2.5977 | 373.49 | 277500 | 2.4643 | 0.5396 | | 2.5977 | 374.16 | 278000 | 2.4654 | 0.5395 | | 2.5977 | 374.83 | 278500 | 2.4645 | 0.5395 | | 2.5977 | 375.5 | 279000 | 2.4649 | 0.5391 | | 2.5977 | 376.18 | 279500 | 2.4639 | 0.5392 | | 2.5961 | 376.85 | 280000 | 2.4659 | 0.5389 | | 2.5961 | 377.52 | 280500 | 2.4681 | 0.5385 | | 2.5961 | 378.2 | 281000 | 2.4641 | 0.5390 | | 2.5961 | 378.87 | 281500 | 2.4658 | 0.5393 | | 2.5961 | 379.54 | 282000 | 2.4687 | 0.5388 | | 2.5961 | 380.22 | 282500 | 2.4690 | 0.5385 | | 2.5961 | 380.89 | 283000 | 2.4679 | 0.5391 | | 2.5961 | 381.56 | 283500 | 2.4612 | 0.5395 | | 2.5961 | 382.23 | 284000 | 2.4624 | 0.5395 | | 2.5961 | 382.91 | 284500 | 2.4668 | 0.5390 | | 2.5947 | 383.58 | 285000 | 2.4663 | 0.5389 | | 2.5947 | 384.25 | 285500 | 2.4654 | 0.5387 | | 2.5947 | 384.93 | 286000 | 2.4708 | 0.5385 | | 2.5947 | 385.6 | 286500 | 2.4669 | 0.5388 | | 2.5947 | 386.27 | 287000 | 2.4612 | 0.5396 | | 2.5947 | 386.94 | 287500 | 2.4666 | 0.5392 | | 2.5947 | 387.62 | 288000 | 2.4653 | 0.5393 | | 2.5947 | 388.29 | 288500 | 2.4666 | 0.5390 | | 2.5947 | 388.96 | 289000 | 2.4684 | 0.5388 | | 2.5947 | 389.64 | 289500 | 2.4660 | 0.5394 | | 2.5936 | 390.31 | 290000 | 2.4642 | 0.5395 | | 2.5936 | 390.98 | 290500 | 2.4627 | 0.5403 | | 2.5936 | 391.66 | 291000 | 2.4683 | 0.5389 | | 2.5936 | 392.33 | 291500 | 2.4667 | 0.5387 | | 2.5936 | 393.0 | 292000 | 2.4660 | 0.5389 | | 2.5936 | 393.67 | 292500 | 2.4673 | 0.5390 | | 2.5936 | 394.35 | 293000 | 2.4645 | 0.5391 | | 2.5936 | 395.02 | 293500 | 2.4693 | 0.5389 | | 2.5936 | 395.69 | 294000 | 2.4692 | 0.5385 | | 2.5936 | 396.37 | 294500 | 2.4653 | 0.5385 | | 2.5934 | 397.04 | 295000 | 2.4661 | 0.5390 | | 2.5934 | 397.71 | 295500 
| 2.4630 | 0.5394 | | 2.5934 | 398.38 | 296000 | 2.4641 | 0.5390 | | 2.5934 | 399.06 | 296500 | 2.4636 | 0.5392 | | 2.5934 | 399.73 | 297000 | 2.4650 | 0.5392 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
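A quick sanity check on the headline numbers above: assuming the reported evaluation loss is a per-token masked-LM cross-entropy (the card does not name the training objective, so this is an assumption), it corresponds to a pseudo-perplexity of roughly exp(2.4650) ≈ 11.8:

```python
import math

# Final evaluation loss reported in the card above (assumption: treated
# here as a per-token masked-LM cross-entropy).
eval_loss = 2.4650

# exp(loss) then gives the pseudo-perplexity over masked positions.
print(math.exp(eval_loss))  # ~11.76
```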
FahimFerdous/DialoGPT-small-cat
9633fc76b4c339e02951c63953388c42b79b3941
2022-07-08T02:28:41.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
FahimFerdous
null
FahimFerdous/DialoGPT-small-cat
9
null
transformers
12,772
--- tags: - conversational --- # Cat DialoGPT Model
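The card gives no usage snippet; below is a minimal single-turn sketch following the standard DialoGPT chat pattern (the prompt and `max_length` are illustrative choices, not from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FahimFerdous/DialoGPT-small-cat")
model = AutoModelForCausalLM.from_pretrained("FahimFerdous/DialoGPT-small-cat")

# DialoGPT expects each turn to end with the EOS token.
input_ids = tokenizer.encode("Hi there!" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly produced tokens.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```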
ChauNguyen23/phobert-base-finetuned-imdb
2b82f37c7083e2c66036ecbf8f273b0b5d4a60c6
2022-07-08T05:03:20.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
fill-mask
false
ChauNguyen23
null
ChauNguyen23/phobert-base-finetuned-imdb
9
null
transformers
12,773
--- tags: - generated_from_trainer datasets: - imdb model-index: - name: phobert-base-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phobert-base-finetuned-imdb This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.6149 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3266 | 1.0 | 157 | 2.7949 | | 2.9162 | 2.0 | 314 | 2.6515 | | 2.8177 | 3.0 | 471 | 2.6452 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
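A hedged usage sketch for the masked-LM head: PhoBERT is a RoBERTa-style model, so the mask placeholder should be `<mask>`; the example sentence is illustrative, not from the card:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ChauNguyen23/phobert-base-finetuned-imdb")

# RoBERTa-style models use <mask> as the mask token.
for pred in fill_mask("This movie was absolutely <mask> ."):
    print(pred["token_str"], round(pred["score"], 3))
```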
mesolitica/t5-small-finetuned-noisy-ms-en
d0a4b4ba07b70d92d85f9b2c87e7fc52329bec5f
2022-07-11T11:05:15.000Z
[ "pytorch", "tf", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_keras_callback", "model-index", "autotrain_compatible" ]
text2text-generation
false
mesolitica
null
mesolitica/t5-small-finetuned-noisy-ms-en
9
null
transformers
12,774
--- tags: - generated_from_keras_callback model-index: - name: t5-small-finetuned-noisy-ms-en results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-noisy-ms-en This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.0 - TensorFlow 2.6.0 - Datasets 2.1.0 - Tokenizers 0.12.1
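The card documents neither an input prefix nor example inputs and outputs. A sketch that assumes raw noisy Malay text can be fed directly to the text2text pipeline (both the assumption and the example sentence are illustrative, not from the card):

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="mesolitica/t5-small-finetuned-noisy-ms-en")

# Assumption: raw noisy Malay input, no task prefix.
print(translator("ak xtau nk ckp ape dah", max_length=64))
```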
jonatasgrosman/exp_w2v2t_fa_hubert_s889
0e8ef33b34e52945b106cbb382cbbdb3c26e3632
2022-07-09T20:36:47.000Z
[ "pytorch", "hubert", "automatic-speech-recognition", "fa", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fa_hubert_s889
9
null
transformers
12,775
--- language: - fa license: apache-2.0 tags: - automatic-speech-recognition - fa datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fa_hubert_s889 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
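Since the card says the model was fine-tuned with HuggingSound, transcription can go through that library; the audio path below is a placeholder, and (as noted above) the input must be sampled at 16 kHz:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_hubert_s889")

# Placeholder path: any 16 kHz audio file containing Persian speech.
transcriptions = model.transcribe(["/path/to/sample_16khz.wav"])
print(transcriptions[0]["transcription"])
```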
Dror/finetuning-sentiment-model-3000-samples
e256c1cacaf63dbf20a1bc80ca1eba3259a4d91f
2022-07-10T11:14:36.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Dror
null
Dror/finetuning-sentiment-model-3000-samples
9
null
transformers
12,776
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87 - name: F1 type: f1 value: 0.8721311475409836 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2979 - Accuracy: 0.87 - F1: 0.8721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
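A minimal inference sketch; note the card does not say how the two labels are named, so expect generic `LABEL_0`/`LABEL_1` unless `id2label` was configured during training:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Dror/finetuning-sentiment-model-3000-samples")

# Example input is illustrative; output is a list of {label, score} dicts.
print(classifier("This movie was a delight from start to finish."))
```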
jonatasgrosman/exp_w2v2t_ru_vp-fr_s730
4cb1b538dfee2ca137945bf84a2bad1f54b3e371
2022-07-11T08:58:28.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_ru_vp-fr_s730
9
null
transformers
12,777
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-fr_s730 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
GhostZen/distilbert-base-uncased-finetuned-squad
fb5286c869b0772651a678d46c6efc3a8b8f4663
2022-07-11T10:38:10.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
GhostZen
null
GhostZen/distilbert-base-uncased-finetuned-squad
9
null
transformers
12,778
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
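The card reports no evaluation metrics; a minimal extractive-QA sketch (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="GhostZen/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```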
jonatasgrosman/exp_w2v2t_ru_vp-it_s533
1c22b57df6d312c153cc9c44fec464cf593215f3
2022-07-11T10:09:38.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_ru_vp-it_s533
9
null
transformers
12,779
--- language: - ru license: apache-2.0 tags: - automatic-speech-recognition - ru datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_ru_vp-it_s533 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ericntay/clinical_bert_ft
c1f8110c791e10715db692cbee20852a6fa1ea6b
2022-07-11T15:30:06.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
ericntay
null
ericntay/clinical_bert_ft
9
null
transformers
12,780
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: clinical_bert_ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical_bert_ft This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2439 - F1: 0.8252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5938 | 1.0 | 95 | 0.2480 | 0.7084 | | 0.1567 | 2.0 | 190 | 0.2035 | 0.7855 | | 0.083 | 3.0 | 285 | 0.2002 | 0.8026 | | 0.0482 | 4.0 | 380 | 0.2046 | 0.8118 | | 0.0269 | 5.0 | 475 | 0.2230 | 0.8143 | | 0.0185 | 6.0 | 570 | 0.2178 | 0.8175 | | 0.0123 | 7.0 | 665 | 0.2269 | 0.8253 | | 0.0093 | 8.0 | 760 | 0.2421 | 0.8227 | | 0.0072 | 9.0 | 855 | 0.2446 | 0.8267 | | 0.006 | 10.0 | 950 | 0.2439 | 0.8252 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
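The entity label set is not documented on the card, so the labels printed below depend on how `id2label` was configured; a sketch with an illustrative clinical sentence:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ericntay/clinical_bert_ft",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("The patient was started on metformin for type 2 diabetes."))
```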
ariesutiono/finetuned-test-1
2c6c6fc26dd7cdeb8474bcd805edf069d5549ad3
2022-07-11T14:57:10.000Z
[ "pytorch", "tensorboard", "bert", "fill-mask", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
ariesutiono
null
ariesutiono/finetuned-test-1
9
null
transformers
12,781
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 model-index: - name: finetuned-test-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-test-1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 1.8192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8219 | 1.0 | 30 | 2.3343 | | 2.4148 | 2.0 | 60 | 2.2010 | | 2.3236 | 3.0 | 90 | 2.1442 | | 2.2231 | 4.0 | 120 | 2.1651 | | 2.2171 | 5.0 | 150 | 2.0614 | | 2.127 | 6.0 | 180 | 2.0405 | | 2.0748 | 7.0 | 210 | 2.0092 | | 2.0511 | 8.0 | 240 | 1.9798 | | 2.0097 | 9.0 | 270 | 1.8662 | | 1.9969 | 10.0 | 300 | 1.9257 | | 2.0006 | 11.0 | 330 | 1.9386 | | 1.9273 | 12.0 | 360 | 1.9357 | | 1.9177 | 13.0 | 390 | 1.8983 | | 1.9128 | 14.0 | 420 | 1.8990 | | 1.8979 | 15.0 | 450 | 1.9037 | | 1.8721 | 16.0 | 480 | 1.8440 | | 1.8998 | 17.0 | 510 | 1.8404 | | 1.8862 | 18.0 | 540 | 1.9193 | | 1.9133 | 19.0 | 570 | 1.8494 | | 1.8799 | 20.0 | 600 | 1.8192 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained
ff63b9397491fb68ad2229780ee70f5865dea53f
2022-07-12T10:20:53.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "model-index" ]
automatic-speech-recognition
false
nawta
null
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained
9
null
transformers
12,782
--- tags: - generated_from_trainer model-index: - name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained This model is a fine-tuned version of a wav2vec2 checkpoint pre-trained locally on ESC-50 (`/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin`; the auto-generated Hub link pointed at this local path and is not a valid repository) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2963 - Cer: 0.9002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.3287 | 23.81 | 500 | 2.2963 | 0.9002 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
ArneD/xlm-roberta-base-finetuned-panx-all
d4853ec206fd0e24150548799d983e33d50bf5d7
2022-07-12T07:50:58.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
ArneD
null
ArneD/xlm-roberta-base-finetuned-panx-all
9
null
transformers
12,783
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset (EN, FR, DE, IT). It achieves the following results on the evaluation set: - Loss: 0.1769 - F1: 0.8535 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2934 | 1.0 | 835 | 0.1853 | 0.8250 | | 0.1569 | 2.0 | 1670 | 0.1714 | 0.8438 | | 0.1008 | 3.0 | 2505 | 0.1769 | 0.8535 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
xyma/PROP-marco-step400k
c682b151910d20ad391e1051e3451c4146dda2b0
2022-07-12T11:53:02.000Z
[ "pytorch", "bert", "pretraining", "en", "dataset:msmarco", "arxiv:2010.10137", "transformers", "PROP", "Pretrain4IR", "license:apache-2.0" ]
null
false
xyma
null
xyma/PROP-marco-step400k
9
null
transformers
12,784
--- language: en tags: - PROP - Pretrain4IR license: apache-2.0 datasets: - msmarco --- # PROP-marco-step400k **PROP**, **P**re-training with **R**epresentative w**O**rds **P**rediction, is a new pre-training method tailored for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the “ideal” document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. The full paper can be found [here](https://arxiv.org/pdf/2010.10137.pdf). This model is pre-trained with more steps than [PROP-marco](https://huggingface.co/xyma/PROP-marco) on the MS MARCO document corpus, and it was used in our submission to the MS MARCO Document Ranking Leaderboard, where we reached 1st place. # Citation If you find our work useful, please consider citing our paper: ```bibtex @inproceedings{DBLP:conf/wsdm/MaGZFJC21, author = {Xinyu Ma and Jiafeng Guo and Ruqing Zhang and Yixing Fan and Xiang Ji and Xueqi Cheng}, editor = {Liane Lewin{-}Eytan and David Carmel and Elad Yom{-}Tov and Eugene Agichtein and Evgeniy Gabrilovich}, title = {{PROP:} Pre-training with Representative Words Prediction for Ad-hoc Retrieval}, booktitle = {{WSDM} '21, The Fourteenth {ACM} International Conference on Web Search and Data Mining, Virtual Event, Israel, March 8-12, 2021}, pages = {283--291}, publisher = {{ACM}}, year = {2021}, url = {https://doi.org/10.1145/3437963.3441777}, doi = {10.1145/3437963.3441777}, timestamp = {Wed, 07 Apr 2021 16:17:44 +0200}, biburl = {https://dblp.org/rec/conf/wsdm/MaGZFJC21.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
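PROP is a pre-trained encoder, not a ready-made ranker: for ad-hoc retrieval it still needs fine-tuning with a relevance head. A sketch of wiring it up as a query-document cross-encoder (the scoring head below is freshly initialized, so its outputs are meaningless until the model is fine-tuned on ranking data):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xyma/PROP-marco-step400k")

# num_labels=1 -> a single relevance logit; this classification head is
# randomly initialized and must be fine-tuned before the scores mean anything.
model = AutoModelForSequenceClassification.from_pretrained("xyma/PROP-marco-step400k", num_labels=1)

inputs = tokenizer(
    "what is representative words prediction",            # query
    "PROP constructs the ROP task for IR pre-training.",  # candidate document
    return_tensors="pt",
    truncation=True,
)
print(model(**inputs).logits)
```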
andy-0v0/orcs-and-friends
90f151bd95011f3bdde11accaed0e118598e61f4
2022-07-12T16:03:57.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index" ]
image-classification
false
andy-0v0
null
andy-0v0/orcs-and-friends
9
null
transformers
12,785
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: orcs-and-friends results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.522522509098053 --- # orcs-and-friends Five-way classifier for orcs and their friends Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### goblin ![goblin](images/goblin.jpg) #### gremlin ![gremlin](images/gremlin.jpg) #### ogre ![ogre](images/ogre.jpg) #### orc ![orc](images/orc.jpg) #### troll ![troll](images/troll.jpg)
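A short inference sketch; the image path is a placeholder for any local file or URL:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="andy-0v0/orcs-and-friends")

# Placeholder: substitute any local image path or URL.
for pred in classifier("orc.jpg"):
    print(pred["label"], round(pred["score"], 3))
```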
annahaz/xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-bal
dd06d3b0b75c81669112d3f9c6d0c9c5ef5c9816
2022-07-12T22:22:15.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
annahaz
null
annahaz/xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-bal
9
null
transformers
12,786
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-bal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-bal This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2811 - Accuracy: 0.6022 - F1: 0.5689 - Precision: 0.5624 - Recall: 0.5756 - Mae: 0.3978 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:| | 0.4434 | 1.0 | 1100 | 0.8792 | 0.5752 | 0.4897 | 0.5414 | 0.4469 | 0.4248 | | 0.3592 | 2.0 | 2200 | 1.0511 | 0.5882 | 0.4597 | 0.5723 | 0.3841 | 0.4118 | | 0.3351 | 3.0 | 3300 | 0.8862 | 0.5639 | 0.5437 | 0.5199 | 0.5698 | 0.4361 | | 0.2649 | 4.0 | 4400 | 1.5065 | 0.5931 | 0.5467 | 0.5556 | 0.5381 | 0.4069 | | 0.2252 | 5.0 | 5500 | 1.2637 | 0.5766 | 0.6084 | 0.5261 | 0.7212 | 0.4234 | | 0.2234 | 6.0 | 6600 | 1.6854 | 0.5832 | 0.5419 | 0.5432 | 0.5405 | 0.4168 | | 0.2288 | 7.0 | 7700 | 1.7353 | 0.5985 | 0.5917 | 0.5517 | 0.6380 | 0.4015 | | 0.2008 | 8.0 | 8800 | 1.8444 | 0.6152 | 0.5693 | 0.5814 | 0.5577 | 0.3848 | | 0.1765 | 9.0 | 9900 | 2.4235 | 0.5833 | 0.5508 | 0.5417 | 0.5601 | 0.4167 | | 0.2334 | 10.0 | 11000 | 2.0034 | 0.6002 | 0.5635 | 0.5611 | 0.5659 | 0.3998 | | 0.1561 | 11.0 | 12100 | 2.3651 | 0.5897 | 0.5772 | 0.5445 | 0.6142 | 0.4103 | | 0.1759 | 12.0 | 13200 | 2.8745 | 0.5855 | 0.5742 | 0.5402 | 0.6128 | 0.4145 | | 0.1306 | 13.0 | 14300 | 2.7506 | 0.5904 | 0.5830 | 0.5442 | 0.6278 | 0.4096 | | 0.1443 | 14.0 | 15400 | 2.7292 | 0.6061 | 0.5549 | 0.5725 | 0.5383 | 0.3939 | | 0.1124 | 15.0 | 16500 | 2.6693 | 0.6119 | 0.5744 | 0.5745 | 0.5742 | 0.3881 | | 0.0886 | 16.0 | 17600 | 2.8332 | 0.6052 | 0.5708 | 0.5661 | 0.5756 | 0.3948 | | 0.078 | 17.0 | 18700 | 3.0623 | 0.6054 | 0.5693 | 0.5668 | 0.5718 | 0.3946 | | 0.0955 | 18.0 | 19800 | 3.1543 | 0.5965 | 0.5725 | 0.5538 | 0.5925 | 0.4035 | | 0.0689 | 19.0 | 20900 | 3.3443 | 0.5971 | 0.5763 | 0.5537 | 0.6009 | 0.4029 | | 0.0669 | 20.0 | 22000 | 3.2811 | 0.6022 | 0.5689 | 0.5624 | 0.5756 | 0.3978 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.9.0+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
Team-PIXEL/pixel-base-finetuned-pos-ud-coptic-scriptorium
8d4435231b62a5f5dadc410ce36fa03cb49c0296
2022-07-13T00:57:47.000Z
[ "pytorch", "pixel", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Team-PIXEL
null
Team-PIXEL/pixel-base-finetuned-pos-ud-coptic-scriptorium
9
null
transformers
12,787
Entry not found
Team-PIXEL/pixel-base-finetuned-pos-ud-hindi-hdtb
6fe87800725ea263b73f4c5ab2af4662b1c21ac0
2022-07-13T01:07:13.000Z
[ "pytorch", "pixel", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Team-PIXEL
null
Team-PIXEL/pixel-base-finetuned-pos-ud-hindi-hdtb
9
null
transformers
12,788
Entry not found
Team-PIXEL/pixel-base-finetuned-pos-ud-korean-gsd
d9c235941659e740600854ffbd6ab2298439cd9e
2022-07-13T01:20:24.000Z
[ "pytorch", "pixel", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Team-PIXEL
null
Team-PIXEL/pixel-base-finetuned-pos-ud-korean-gsd
9
null
transformers
12,789
Entry not found
Hamzaaa/wav2vec2-base-finetuned-savee
7a208b18d5001d7b6314888f28d2c89f94cc1988
2022-07-13T12:15:40.000Z
[ "pytorch", "tensorboard", "wav2vec2", "audio-classification", "transformers" ]
audio-classification
false
Hamzaaa
null
Hamzaaa/wav2vec2-base-finetuned-savee
9
null
transformers
12,790
Entry not found
ahadda5/bart_wikikp_kp20k_openkp
be78ef6f81cfa7938ad43796554f0471a1a42bdd
2022-07-13T21:46:22.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ahadda5
null
ahadda5/bart_wikikp_kp20k_openkp
9
null
transformers
12,791
Entry not found
CovRelex-SE/CORD19-BERT
3fd28418afb17bf92298fb0329bffd72618e3deb
2022-07-14T02:46:19.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
fill-mask
false
CovRelex-SE
null
CovRelex-SE/CORD19-BERT
9
null
transformers
12,792
--- tags: - generated_from_trainer model-index: - name: CORD19_BERT results: [] --- # CORD19-BERT ## How to use ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('CovRelex-SE/CORD19-BERT') model = BertModel.from_pretrained("CovRelex-SE/CORD19-BERT") text = "The virus can spread from an infected person’s mouth or nose." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
userGagan/segformer-b0-finetuned-segments-sidewalk-5
c202064f3c68198c9295722c18fbbbd3a087f144
2022-07-14T05:20:44.000Z
[ "pytorch", "segformer", "transformers" ]
null
false
userGagan
null
userGagan/segformer-b0-finetuned-segments-sidewalk-5
9
null
transformers
12,793
Entry not found
userGagan/segformer-b0-finetuned-segments-sidewalk-6
b03594077d1a61c997a3a50e05439d80846c25ff
2022-07-14T06:43:18.000Z
[ "pytorch", "tensorboard", "segformer", "transformers" ]
null
false
userGagan
null
userGagan/segformer-b0-finetuned-segments-sidewalk-6
9
null
transformers
12,794
Entry not found
Team-PIXEL/pixel-base-finetuned-korquadv1
4d8f037c2d6bd37991d5a82f04f24006f54652b0
2022-07-14T15:58:12.000Z
[ "pytorch", "pixel", "question-answering", "dataset:squad_kor_v1", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
Team-PIXEL
null
Team-PIXEL/pixel-base-finetuned-korquadv1
9
null
transformers
12,795
--- tags: - generated_from_trainer datasets: - squad_kor_v1 model-index: - name: pixel-base-finetuned-korquadv1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pixel-base-finetuned-korquadv1 This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the squad_kor_v1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 45 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 20000 - mixed_precision_training: Apex, opt level O1 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.12.1
Team-PIXEL/pixel-base-finetuned-masakhaner-pcm
75af0e7488c6e873557ad479b27beb021ca80768
2022-07-15T03:28:01.000Z
[ "pytorch", "pixel", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Team-PIXEL
null
Team-PIXEL/pixel-base-finetuned-masakhaner-pcm
9
null
transformers
12,796
Entry not found
furrutiav/beto_bi_purpose
238523441df3e96aed72087fe6b59f77b3badaa8
2022-07-20T19:23:43.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
furrutiav
null
furrutiav/beto_bi_purpose
9
null
transformers
12,797
Entry not found
worknick/bert-base-cased-finetuned-conll2003
f28f6abccc94df1c5b80c47b6e3c264008451f46
2022-07-15T05:56:36.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
worknick
null
worknick/bert-base-cased-finetuned-conll2003
9
null
transformers
12,798
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-cased-finetuned-conll2003 results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9409771754636234 - name: Recall type: recall value: 0.946886775524852 - name: F1 type: f1 value: 0.9439227260531259 - name: Accuracy type: accuracy value: 0.9859745687878966 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-conll2003 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0643 - Precision: 0.9410 - Recall: 0.9469 - F1: 0.9439 - Accuracy: 0.9860 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2349 | 0.57 | 500 | 0.0885 | 0.8957 | 0.8980 | 0.8968 | 0.9747 | | 0.0822 | 1.14 | 1000 | 0.0774 | 0.9184 | 0.9219 | 0.9202 | 0.9802 | | 0.0476 | 1.71 | 1500 | 0.0683 | 0.9345 | 0.9325 | 0.9335 | 0.9833 | | 0.0368 | 2.28 | 2000 | 0.0653 | 0.9333 | 0.9430 | 0.9381 | 0.9847 | | 0.028 | 2.85 | 2500 | 0.0670 | 0.9279 | 0.9342 | 0.9311 | 0.9835 | | 0.0171 | 3.42 | 3000 | 0.0643 | 0.9410 | 0.9469 | 0.9439 | 0.9860 | | 0.0149 | 3.99 | 3500 | 0.0667 | 0.9369 | 0.9477 | 0.9422 | 0.9856 | | 0.0088 | 4.56 | 4000 | 0.0698 | 0.9360 | 0.9473 | 0.9416 | 0.9855 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
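As a quick consistency check on the headline numbers, the reported F1 is the harmonic mean of the reported precision and recall:

```python
precision = 0.9409771754636234
recall = 0.946886775524852

# F1 = 2PR / (P + R); reproduces the 0.9439 reported above.
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~0.94392
```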
valhalla/vqgan_imagenet_f16_16384
bd05189dba2dacbb9c8335dd46686ea38873c2b5
2022-07-25T14:57:11.000Z
[ "pytorch", "transformers" ]
null
false
valhalla
null
valhalla/vqgan_imagenet_f16_16384
9
null
transformers
12,799
Entry not found