Dataset columns (each record below repeats these fields in this order):

| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-02 00:43:14 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (461 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-02 00:42:27 |
| card | string (length) | 11 | 1.01M |
hopkins/eng-deu-union
hopkins
2023-07-08T12:29:59Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T12:11:50Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-deu-union results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-deu-union This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6328 - Bleu: 21.3888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
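A minimal inference sketch for the checkpoint above, assuming it keeps the mBART-50 tokenizer and language codes of its base model; the `en_XX`/`de_DE` codes and the example sentence are illustrative, not taken from the card:

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "hopkins/eng-deu-union"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # English source, per the mBART-50 language-code convention
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"],  # force German as the target language
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```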
abhi-8/DialoGPT-medium-Michael
abhi-8
2023-07-08T12:29:13Z
134
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-08T08:44:38Z
--- pipeline_tag: conversational ---
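A minimal chat sketch for the model above, following the standard single-turn DialoGPT recipe; the prompt text is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhi-8/DialoGPT-medium-Michael"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# encode one user turn followed by the end-of-sequence token, then generate the bot reply
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```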
abhi-8/DialoGPT-medium-Joshua-twevy
abhi-8
2023-07-08T12:27:10Z
149
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-08T09:41:50Z
--- license: mit pipeline_tag: conversational ---
magnustragardh/speecht5_finetuned_voxpopuli_nl
magnustragardh
2023-07-08T11:58:23Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-08T09:04:21Z
--- license: mit tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5211 | 4.3 | 1000 | 0.4802 | | 0.4963 | 8.61 | 2000 | 0.4655 | | 0.4956 | 12.91 | 3000 | 0.4626 | | 0.4936 | 17.21 | 4000 | 0.4598 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
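A minimal synthesis sketch for the SpeechT5 checkpoint above, assuming the repo ships the processor files; the x-vector dataset, speaker index, and Dutch test sentence are assumptions, and `microsoft/speecht5_hifigan` is the usual vocoder pairing:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "magnustragardh/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a speaker embedding; these CMU Arctic x-vectors are a common choice
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="hallo, dit is een test", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```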
RogerB/KinyaBERT-small-finetuned-kintweetsC
RogerB
2023-07-08T11:53:04Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-08T11:47:27Z
--- tags: - generated_from_trainer model-index: - name: KinyaBERT-small-finetuned-kintweetsC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KinyaBERT-small-finetuned-kintweetsC This model is a fine-tuned version of [jean-paul/KinyaBERT-small](https://huggingface.co/jean-paul/KinyaBERT-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.3695 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.8662 | 1.0 | 750 | 4.5594 | | 4.5576 | 2.0 | 1500 | 4.3643 | | 4.4323 | 3.0 | 2250 | 4.3253 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
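A minimal fill-mask sketch for the checkpoint above; the Kinyarwanda greeting is a placeholder prompt, not taken from the card:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="RogerB/KinyaBERT-small-finetuned-kintweetsC")

# query the tokenizer for its mask token instead of hard-coding "[MASK]"
text = f"Muraho, {fill.tokenizer.mask_token}!"  # placeholder sentence; use real tweet-style text
for prediction in fill(text, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```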
mouaadblhn/q-FrozenLake-v1-4x4-noSlippery
mouaadblhn
2023-07-08T11:40:45Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T11:40:44Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mouaadblhn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
itslogannye/benignEnchondroma-vs-lowGradeMalignantChondrosarcoma-histopathology
itslogannye
2023-07-08T11:39:06Z
227
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "autotrain", "vision", "dataset:logannyeMD/autotrain-data-enchondroma-vs-low-grade-chondrosarcoma-histology", "license:apache-2.0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-19T13:25:30Z
--- tags: - autotrain - vision - image-classification datasets: - logannyeMD/autotrain-data-enchondroma-vs-low-grade-chondrosarcoma-histology widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 3.6593488665934646 license: apache-2.0 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 2962985627 - CO2 Emissions (in grams): 3.6593 ## Validation Metrics - Loss: 0.229 - Accuracy: 0.887 - Precision: 0.939 - Recall: 0.821 - AUC: 0.969 - F1: 0.876
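A minimal inference sketch for the AutoTrain classifier above; `slide_patch.png` is a placeholder path for a local histology image:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="itslogannye/benignEnchondroma-vs-lowGradeMalignantChondrosarcoma-histopathology",
)
predictions = classifier("slide_patch.png")  # placeholder path to a histology slide patch
for prediction in predictions:
    print(prediction["label"], round(prediction["score"], 3))
```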
jkraushaar/distilbert-base-uncased-finetuned-emotion
jkraushaar
2023-07-08T11:31:58Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T18:05:42Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9245071578761553 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2093 - Accuracy: 0.9245 - F1: 0.9245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.2993 | 0.91 | 0.9084 | | No log | 2.0 | 500 | 0.2093 | 0.9245 | 0.9245 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
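A minimal inference sketch for the emotion classifier above; `top_k=None` returns a score per label on recent transformers releases (older releases use `return_all_scores=True`), and the example sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jkraushaar/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every emotion label
)
print(classifier("I can't believe how well this worked, I'm thrilled!"))
```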
nopperl/alpaca-lora-7b-german-base-51k-ggml
nopperl
2023-07-08T11:06:41Z
7
5
transformers
[ "transformers", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-10T22:54:33Z
--- license: apache-2.0 --- <p align="center" width="100%"> <img src="https://huggingface.co/nopperl/alpaca-lora-7b-german-base-51k-ggml/raw/main/zicklein-ggml.jpg" alt="a lean, scrawny llama at the oktoberfest" style="width: 20%; min-width: 300px; display: block; margin: auto;"> </p> # Zicklein-GGML GGML conversion of [Zicklein](https://github.com/avocardio/zicklein) (a German [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) LoRa for [LLaMA](https://github.com/facebookresearch/llama)). Compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp) version master-2d43387 or later. See [Alpaca](https://github.com/tatsu-lab/stanford_alpaca#data-release) for instructions on how to prompt the model. More information about the conversion process is in this [git repo](https://github.com/nopperl/Zicklein-GGML).
jayanta/microsoft-resnet-50-cartoon-emotion-detection
jayanta
2023-07-08T11:03:28Z
330
3
transformers
[ "transformers", "pytorch", "tensorboard", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-21T11:44:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision - recall - f1 model-index: - name: microsoft-resnet-50-cartoon-emotion-detection results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8165137614678899 - name: Precision type: precision value: 0.8181998512273742 - name: Recall type: recall value: 0.8165137614678899 - name: F1 type: f1 value: 0.8172526992448356 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # microsoft-resnet-50-cartoon-emotion-detection This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4801 - Accuracy: 0.8165 - Precision: 0.8182 - Recall: 0.8165 - F1: 0.8173 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00012 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 80 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 0.97 | 8 | 1.3855 | 0.2294 | 0.2697 | 0.2294 | 0.2165 | | 1.4222 | 1.97 | 16 | 1.3792 | 0.2569 | 0.2808 | 0.2569 | 0.2543 | | 1.4183 | 2.97 | 24 | 1.3646 | 0.3853 | 0.4102 | 0.3853 | 0.3511 | | 1.4097 | 3.97 | 32 | 1.3563 | 0.4128 | 0.5062 | 0.4128 | 0.3245 | | 1.3944 | 4.97 | 40 | 1.3462 | 0.4037 | 0.3927 | 0.4037 | 0.2939 | | 1.3944 | 5.97 | 48 | 1.3223 | 0.4037 | 0.5152 | 0.4037 | 0.2841 | | 1.411 | 6.97 | 56 | 1.3040 | 0.4128 | 0.4404 | 0.4128 | 0.2985 | | 1.346 | 7.97 | 64 | 1.2700 | 0.4954 | 0.4960 | 0.4954 | 0.4093 | | 1.3031 | 8.97 | 72 | 1.2150 | 0.5596 | 0.5440 | 0.5596 | 0.4672 | | 1.2371 | 9.97 | 80 | 1.1580 | 0.5963 | 0.5659 | 0.5963 | 0.5101 | | 1.2371 | 10.97 | 88 | 1.0670 | 0.6055 | 0.7279 | 0.6055 | 0.5211 | | 1.1736 | 11.97 | 96 | 0.9856 | 0.6606 | 0.5537 | 0.6606 | 0.5772 | | 1.0457 | 12.97 | 104 | 0.8963 | 0.6697 | 0.7631 | 0.6697 | 0.5965 | | 0.953 | 13.97 | 112 | 0.8547 | 0.6697 | 0.6885 | 0.6697 | 0.6081 | | 0.8579 | 14.97 | 120 | 0.7849 | 0.7156 | 0.7396 | 0.7156 | 0.6643 | | 0.8579 | 15.97 | 128 | 0.7564 | 0.7431 | 0.7372 | 0.7431 | 0.7119 | | 0.8167 | 16.97 | 136 | 0.7133 | 0.7615 | 0.7507 | 0.7615 | 0.7211 | | 0.7273 | 17.97 | 144 | 0.6888 | 0.7523 | 0.7379 | 0.7523 | 0.7202 | | 0.6547 | 18.97 | 152 | 0.6592 | 0.7798 | 0.7773 | 0.7798 | 0.7577 | | 0.5963 | 19.97 | 160 | 0.6136 | 0.7706 | 0.7642 | 0.7706 | 0.7551 | | 0.5963 | 20.97 | 168 | 0.5723 | 0.7890 | 0.7802 | 0.7890 | 0.7787 | | 0.551 | 21.97 | 176 | 0.5686 | 0.7890 | 0.7761 | 0.7890 | 0.7781 | | 0.4929 | 22.97 | 184 | 0.5597 | 0.7706 | 0.7649 | 0.7706 | 0.7651 | | 0.4309 | 23.97 | 192 | 0.5234 | 0.7890 | 0.7774 | 0.7890 | 0.7810 | | 0.3945 | 24.97 | 200 | 0.5008 | 0.7890 | 0.7837 | 0.7890 | 
0.7813 | | 0.3945 | 25.97 | 208 | 0.5289 | 0.7523 | 0.7537 | 0.7523 | 0.7529 | | 0.3704 | 26.97 | 216 | 0.4399 | 0.7982 | 0.7958 | 0.7982 | 0.7963 | | 0.3267 | 27.97 | 224 | 0.4539 | 0.8073 | 0.7983 | 0.8073 | 0.8005 | | 0.2966 | 28.97 | 232 | 0.4735 | 0.7798 | 0.7892 | 0.7798 | 0.7837 | | 0.2645 | 29.97 | 240 | 0.4594 | 0.7706 | 0.7706 | 0.7706 | 0.7706 | | 0.2645 | 30.97 | 248 | 0.4699 | 0.7523 | 0.7554 | 0.7523 | 0.7533 | | 0.2527 | 31.97 | 256 | 0.4551 | 0.7890 | 0.7856 | 0.7890 | 0.7857 | | 0.2202 | 32.97 | 264 | 0.4458 | 0.8165 | 0.8198 | 0.8165 | 0.8170 | | 0.2006 | 33.97 | 272 | 0.4632 | 0.7798 | 0.7941 | 0.7798 | 0.7850 | | 0.1589 | 34.97 | 280 | 0.4651 | 0.7890 | 0.7993 | 0.7890 | 0.7925 | | 0.1589 | 35.97 | 288 | 0.4595 | 0.7798 | 0.7824 | 0.7798 | 0.7804 | | 0.153 | 36.97 | 296 | 0.4584 | 0.7615 | 0.7691 | 0.7615 | 0.7633 | | 0.1427 | 37.97 | 304 | 0.4608 | 0.7798 | 0.7830 | 0.7798 | 0.7796 | | 0.113 | 38.97 | 312 | 0.4571 | 0.7890 | 0.7922 | 0.7890 | 0.7899 | | 0.1146 | 39.97 | 320 | 0.5270 | 0.7615 | 0.7651 | 0.7615 | 0.7613 | | 0.1146 | 40.97 | 328 | 0.4888 | 0.7706 | 0.7782 | 0.7706 | 0.7710 | | 0.1275 | 41.97 | 336 | 0.4523 | 0.7890 | 0.7809 | 0.7890 | 0.7837 | | 0.0959 | 42.97 | 344 | 0.4697 | 0.7798 | 0.7753 | 0.7798 | 0.7767 | | 0.0882 | 43.97 | 352 | 0.4286 | 0.7706 | 0.7686 | 0.7706 | 0.7686 | | 0.0847 | 44.97 | 360 | 0.5317 | 0.7890 | 0.7993 | 0.7890 | 0.7925 | | 0.0847 | 45.97 | 368 | 0.5431 | 0.7615 | 0.7700 | 0.7615 | 0.7647 | | 0.0813 | 46.97 | 376 | 0.4432 | 0.8257 | 0.8435 | 0.8257 | 0.8284 | | 0.0768 | 47.97 | 384 | 0.4886 | 0.7982 | 0.8005 | 0.7982 | 0.7956 | | 0.0627 | 48.97 | 392 | 0.5373 | 0.7982 | 0.8072 | 0.7982 | 0.8010 | | 0.0688 | 49.97 | 400 | 0.5897 | 0.7798 | 0.7892 | 0.7798 | 0.7822 | | 0.0688 | 50.97 | 408 | 0.5115 | 0.7982 | 0.8015 | 0.7982 | 0.7992 | | 0.0676 | 51.97 | 416 | 0.4881 | 0.7982 | 0.7998 | 0.7982 | 0.7978 | | 0.0539 | 52.97 | 424 | 0.4820 | 0.8073 | 0.8139 | 0.8073 | 0.8077 | | 0.0596 | 53.97 | 432 | 0.4450 | 0.8257 | 0.8246 | 0.8257 | 0.8244 | | 0.0611 | 54.97 | 440 | 0.5057 | 0.7890 | 0.8008 | 0.7890 | 0.7924 | | 0.0611 | 55.97 | 448 | 0.4918 | 0.7982 | 0.8056 | 0.7982 | 0.8008 | | 0.0643 | 56.97 | 456 | 0.5946 | 0.7523 | 0.7587 | 0.7523 | 0.7545 | | 0.0605 | 57.97 | 464 | 0.4888 | 0.8073 | 0.8239 | 0.8073 | 0.8121 | | 0.063 | 58.97 | 472 | 0.5917 | 0.7890 | 0.8051 | 0.7890 | 0.7937 | | 0.0595 | 59.97 | 480 | 0.5117 | 0.7890 | 0.7904 | 0.7890 | 0.7894 | | 0.0595 | 60.97 | 488 | 0.5497 | 0.7615 | 0.7692 | 0.7615 | 0.7635 | | 0.0554 | 61.97 | 496 | 0.4742 | 0.8165 | 0.8101 | 0.8165 | 0.8126 | | 0.0557 | 62.97 | 504 | 0.5369 | 0.7890 | 0.7886 | 0.7890 | 0.7886 | | 0.0539 | 63.97 | 512 | 0.5440 | 0.7890 | 0.7922 | 0.7890 | 0.7899 | | 0.048 | 64.97 | 520 | 0.5924 | 0.7890 | 0.7878 | 0.7890 | 0.7883 | | 0.048 | 65.97 | 528 | 0.4863 | 0.8440 | 0.8440 | 0.8440 | 0.8440 | | 0.045 | 66.97 | 536 | 0.5850 | 0.8073 | 0.8076 | 0.8073 | 0.8047 | | 0.047 | 67.97 | 544 | 0.4939 | 0.8257 | 0.8212 | 0.8257 | 0.8227 | | 0.0412 | 68.97 | 552 | 0.4850 | 0.7890 | 0.7912 | 0.7890 | 0.7900 | | 0.0392 | 69.97 | 560 | 0.5066 | 0.8257 | 0.8265 | 0.8257 | 0.8258 | | 0.0392 | 70.97 | 568 | 0.4965 | 0.8073 | 0.8053 | 0.8073 | 0.8058 | | 0.0423 | 71.97 | 576 | 0.4717 | 0.8349 | 0.8376 | 0.8349 | 0.8351 | | 0.0471 | 72.97 | 584 | 0.4845 | 0.8257 | 0.8378 | 0.8257 | 0.8296 | | 0.0322 | 73.97 | 592 | 0.5188 | 0.7706 | 0.7689 | 0.7706 | 0.7693 | | 0.042 | 74.97 | 600 | 0.5242 | 0.7706 | 0.7699 | 0.7706 | 0.7701 | | 0.042 | 75.97 | 608 | 0.5945 | 0.7798 | 0.7824 | 
0.7798 | 0.7804 | | 0.0416 | 76.97 | 616 | 0.5432 | 0.7982 | 0.8038 | 0.7982 | 0.7993 | | 0.0399 | 77.97 | 624 | 0.5381 | 0.7982 | 0.8072 | 0.7982 | 0.7994 | | 0.0439 | 78.97 | 632 | 0.6181 | 0.7798 | 0.7878 | 0.7798 | 0.7827 | | 0.0462 | 79.97 | 640 | 0.4801 | 0.8165 | 0.8182 | 0.8165 | 0.8173 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu117 - Datasets 2.8.0 - Tokenizers 0.11.0
Xanadu00/galaxy_classifier_mobilevit_3
Xanadu00
2023-07-08T10:57:13Z
64
0
transformers
[ "transformers", "tf", "mobilevit", "image-classification", "generated_from_keras_callback", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-08T06:25:01Z
--- license: other tags: - generated_from_keras_callback model-index: - name: Xanadu00/galaxy_classifier_mobilevit_3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Xanadu00/galaxy_classifier_mobilevit_3 This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1914 - Train Accuracy: 0.9341 - Validation Loss: 0.5148 - Validation Accuracy: 0.8512 - Epoch: 16 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamW', 'weight_decay': 0.01, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.002, 'decay_steps': 10000, 'decay_rate': 0.01, 'staircase': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 1.1049 | 0.6128 | 0.7422 | 0.7517 | 0 | | 0.7149 | 0.7564 | 0.6376 | 0.7821 | 1 | | 0.6080 | 0.7945 | 0.6947 | 0.7745 | 2 | | 0.5376 | 0.8160 | 0.5589 | 0.8134 | 3 | | 0.4977 | 0.8279 | 0.5458 | 0.8162 | 4 | | 0.4564 | 0.8407 | 0.4799 | 0.8441 | 5 | | 0.4271 | 0.8557 | 0.4765 | 0.8413 | 6 | | 0.3957 | 0.8619 | 0.4790 | 0.8453 | 7 | | 0.3701 | 0.8741 | 0.5376 | 0.8329 | 8 | | 0.3425 | 0.8829 | 0.4359 | 0.8619 | 9 | | 0.3192 | 0.8892 | 0.4475 | 0.8585 | 10 | | 0.2972 | 0.8967 | 0.4143 | 0.8712 | 11 | | 0.2691 | 0.9080 | 0.4819 | 0.8498 | 12 | | 0.2445 | 0.9144 | 0.4543 | 0.8563 | 13 | | 0.2261 | 0.9220 | 0.4221 | 0.8689 | 14 | | 0.2127 | 0.9251 | 0.5076 | 0.8540 | 15 | | 0.1914 | 0.9341 | 0.5148 | 0.8512 | 16 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Nour33/t5-small-finetuned-samsum
Nour33
2023-07-08T10:52:03Z
106
0
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-03T21:32:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: t5-small-finetuned-samsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-samsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.7087 - Validation Loss: 1.6756 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 14728, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.1000 | 1.7915 | 0 | | 1.9259 | 1.7424 | 1 | | 1.8512 | 1.7167 | 2 | | 1.8005 | 1.6925 | 3 | | 1.7655 | 1.6840 | 4 | | 1.7392 | 1.6799 | 5 | | 1.7204 | 1.6757 | 6 | | 1.7087 | 1.6756 | 7 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
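A minimal summarization sketch for the checkpoint above, assuming it was fine-tuned for dialogue summarization as the SAMSum name suggests; the dialogue is illustrative:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Nour33/t5-small-finetuned-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Tom: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```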
lordsauron/dqn-SpaceInvadersNoFrameskip-v4
lordsauron
2023-07-08T10:48:11Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:47:32Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 635.00 +/- 249.91 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lordsauron -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lordsauron -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lordsauron ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Aryapjr14/Dream-world
Aryapjr14
2023-07-08T10:41:52Z
0
0
null
[ "art", "Anime", "Sexy", "2.5D", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-16T10:56:18Z
--- license: creativeml-openrail-m pipeline_tag: text-to-image tags: - art - Anime - Sexy - 2.5D ---
mpetrikov/Pixelcopter-PLE-v0
mpetrikov
2023-07-08T10:17:17Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T22:48:25Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 34.90 +/- 29.17 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Sukmin/dqn-SpaceInvadersNoFrameskip-v4
Sukmin
2023-07-08T10:11:41Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T10:10:52Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 565.50 +/- 178.22 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sukmin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sukmin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sukmin ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
susnato/whisper-tiny-en-minds14_2
susnato
2023-07-08T10:08:34Z
84
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-08T10:06:15Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: Whisper Tiny results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Minds 14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.3919716646989374 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Minds 14 dataset. It achieves the following results on the evaluation set: - Loss: 0.8095 - Wer Ortho: 0.4257 - Wer: 0.3920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.354 | 1.0 | 15 | 0.8095 | 0.4257 | 0.3920 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 1.13.1 - Datasets 2.13.1 - Tokenizers 0.13.2
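A minimal transcription sketch for the Whisper checkpoint above; `sample_call.wav` is a placeholder path for a short mono recording:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="susnato/whisper-tiny-en-minds14_2",
)
result = asr("sample_call.wav")  # placeholder audio file path
print(result["text"])
```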
raygx/Nepali-GPT2-CausalLM
raygx
2023-07-08T10:03:57Z
61
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-29T04:57:51Z
--- tags: - generated_from_keras_callback model-index: - name: Nepali-GPT2-CausalLM results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Nepali-GPT2-CausalLM This model is a fine-tuned version of [raygx/Nepali-GPT2-CausalLM](https://huggingface.co/raygx/Nepali-GPT2-CausalLM) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.7022 - Validation Loss: 4.6237 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.8141 | 4.6678 | 0 | | 4.7022 | 4.6237 | 1 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.11.0 - Datasets 2.1.0 - Tokenizers 0.13.3
Khushnur/t5-base-end2end-questions-generation_squad_aug
Khushnur
2023-07-08T09:46:13Z
161
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-08T08:11:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-base-end2end-questions-generation_squad_aug results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-end2end-questions-generation_squad_aug This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9281 | 0.25 | 100 | 3.0443 | | 1.7378 | 0.5 | 200 | 3.0395 | | 1.6719 | 0.76 | 300 | 3.0509 | | 1.6495 | 1.01 | 400 | 3.0564 | | 1.572 | 1.26 | 500 | 3.0780 | | 1.5609 | 1.51 | 600 | 3.0569 | | 1.5684 | 1.76 | 700 | 3.0696 | | 1.5579 | 2.01 | 800 | 3.0729 | | 1.5017 | 2.27 | 900 | 3.0898 | | 1.5079 | 2.52 | 1000 | 3.0879 | | 1.503 | 2.77 | 1100 | 3.0874 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
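A minimal generation sketch for the question-generation checkpoint above; the `generate questions:` prefix follows the common end-to-end QG recipe and may not match this checkpoint's exact training format, and the context passage is illustrative:

```python
from transformers import pipeline

qg = pipeline(
    "text2text-generation",
    model="Khushnur/t5-base-end2end-questions-generation_squad_aug",
)
context = (
    "generate questions: The Amazon rainforest covers much of the Amazon basin of "
    "South America and is the largest tropical rainforest in the world."
)
print(qg(context, max_length=128)[0]["generated_text"])
```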
NasimB/gpt2-concat-bnc-rarity-12k-1p5k
NasimB
2023-07-08T09:39:25Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-08T07:44:06Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-bnc-rarity-12k-1p5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-bnc-rarity-12k-1p5k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7337 | 0.29 | 500 | 5.6373 | | 5.3734 | 0.59 | 1000 | 5.1990 | | 5.0255 | 0.88 | 1500 | 4.9588 | | 4.7542 | 1.18 | 2000 | 4.7996 | | 4.593 | 1.47 | 2500 | 4.6785 | | 4.4842 | 1.76 | 3000 | 4.5724 | | 4.353 | 2.06 | 3500 | 4.4943 | | 4.1666 | 2.35 | 4000 | 4.4439 | | 4.1294 | 2.65 | 4500 | 4.3928 | | 4.0879 | 2.94 | 5000 | 4.3360 | | 3.8794 | 3.23 | 5500 | 4.3322 | | 3.8264 | 3.53 | 6000 | 4.3009 | | 3.8139 | 3.82 | 6500 | 4.2684 | | 3.6919 | 4.12 | 7000 | 4.2740 | | 3.542 | 4.41 | 7500 | 4.2658 | | 3.5326 | 4.7 | 8000 | 4.2494 | | 3.5195 | 5.0 | 8500 | 4.2370 | | 3.3414 | 5.29 | 9000 | 4.2524 | | 3.3457 | 5.58 | 9500 | 4.2512 | | 3.3385 | 5.88 | 10000 | 4.2500 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
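A minimal sampling sketch for the GPT-2 checkpoint above; the prompt and decoding settings are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-bnc-rarity-12k-1p5k")
output = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```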
imdanboy/kss_jets
imdanboy
2023-07-08T09:29:54Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "ko", "dataset:kss", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2023-07-08T09:27:00Z
--- tags: - espnet - audio - text-to-speech language: ko datasets: - kss license: cc-by-4.0 --- ## ESPnet2 TTS model ### `imdanboy/kss_jets` This model was trained by imdanboy using kss recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 967ddbed826a7c90b75be2a7129588442d5cb6af pip install -e . cd egs2/kss/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model imdanboy/kss_jets ``` ## TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_jets.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_jets_raw_phn_g2pk_no_space ngpu: 1 seed: 777 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 51627 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false collect_stats: false write_collected_feats: false max_epoch: 1000 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - text2mel_loss - min - - train - text2mel_loss - min - - train - total_count - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: -1 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: 50 use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 1000 batch_size: 20 valid_batch_size: null batch_bins: 4500000 valid_batch_bins: null train_shape_file: - exp/tts_stats_raw_phn_g2pk_no_space/train/text_shape.phn - exp/tts_stats_raw_phn_g2pk_no_space/train/speech_shape valid_shape_file: - exp/tts_stats_raw_phn_g2pk_no_space/valid/text_shape.phn - exp/tts_stats_raw_phn_g2pk_no_space/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 chunk_excluded_key_prefixes: [] train_data_path_and_name_and_type: - - dump/raw/tr_no_dev/text - text - text - - dump/raw/tr_no_dev/wav.scp - speech - sound - - exp/tts_stats_raw_phn_g2pk_no_space/train/collect_feats/pitch.scp - pitch - npy - - exp/tts_stats_raw_phn_g2pk_no_space/train/collect_feats/energy.scp - energy - npy valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text - - dump/raw/dev/wav.scp - speech - sound - - exp/tts_stats_raw_phn_g2pk_no_space/valid/collect_feats/pitch.scp - pitch - npy - - exp/tts_stats_raw_phn_g2pk_no_space/valid/collect_feats/energy.scp - energy - npy allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adamw optim_conf: lr: 0.0002 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler: exponentiallr scheduler_conf: gamma: 0.999875 optim2: adamw optim2_conf: lr: 0.0002 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler2: 
exponentiallr scheduler2_conf: gamma: 0.999875 generator_first: true token_list: - <blank> - <unk> - ᅡ - ᅵ - ᄋ - ᅳ - ᄀ - ᅥ - ᄂ - ᆫ - ᄅ - ᄌ - ᄉ - ᅩ - ᆯ - ᄆ - . - ᅮ - ᄃ - ᄒ - ᅦ - ᆼ - ᅢ - ᄇ - ᅭ - ᅧ - ᄊ - ᆷ - ᄄ - ᆮ - ᄎ - ᄁ - ᆨ - ᄑ - ᄐ - ᅪ - ᄏ - '?' - ᄍ - ᆸ - ᅬ - ᅣ - ᅴ - ᅯ - ᅨ - ᄈ - ᅱ - ᅲ - ᅫ - ',' - '!' - ᅤ - ':' - ᅰ - '''' - '-' - '"' - / - I - M - F - E - S - C - A - B - ㅇ - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: g2pk_no_space feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 24000 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/tts_stats_raw_phn_g2pk_no_space/train/feats_stats.npz tts: jets tts_conf: generator_type: jets_generator generator_params: adim: 256 aheads: 2 elayers: 4 eunits: 1024 dlayers: 4 dunits: 1024 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 use_masking: true encoder_normalize_before: true decoder_normalize_before: true encoder_type: transformer decoder_type: transformer conformer_rel_pos_type: latest conformer_pos_enc_layer_type: rel_pos conformer_self_attn_layer_type: rel_selfattn conformer_activation_type: swish use_macaron_style_in_conformer: true use_cnn_in_conformer: true conformer_enc_kernel_size: 7 conformer_dec_kernel_size: 31 init_type: xavier_uniform transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false generator_out_channels: 1 generator_channels: 512 generator_global_channels: -1 generator_kernel_size: 7 generator_upsample_scales: - 8 - 8 - 2 - 2 generator_upsample_kernel_sizes: - 16 - 16 - 4 - 4 generator_resblock_kernel_sizes: - 3 - 7 - 11 generator_resblock_dilations: - - 1 - 3 - 5 - - 1 - 3 - 5 - - 1 - 3 - 5 generator_use_additional_convs: true generator_bias: true generator_nonlinear_activation: LeakyReLU generator_nonlinear_activation_params: negative_slope: 0.1 generator_use_weight_norm: true segment_size: 32 idim: 68 odim: 80 discriminator_type: hifigan_multi_scale_multi_period_discriminator discriminator_params: scales: 1 scale_downsample_pooling: AvgPool1d scale_downsample_pooling_params: kernel_size: 4 stride: 2 padding: 2 scale_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 15 - 41 - 5 - 3 channels: 128 max_downsample_channels: 1024 max_groups: 16 bias: true downsample_scales: - 2 - 2 - 4 - 4 - 1 nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false follow_official_norm: false periods: - 2 - 3 - 5 - 7 - 11 period_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 5 - 3 channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 max_downsample_channels: 1024 bias: true nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true 
use_spectral_norm: false generator_adv_loss_params: average_by_discriminators: false loss_type: mse discriminator_adv_loss_params: average_by_discriminators: false loss_type: mse feat_match_loss_params: average_by_discriminators: false average_by_layers: false include_final_outputs: true mel_loss_params: fs: 24000 n_fft: 1024 hop_length: 256 win_length: null window: hann n_mels: 80 fmin: 0 fmax: null log_base: null lambda_adv: 1.0 lambda_mel: 45.0 lambda_feat_match: 2.0 lambda_var: 1.0 lambda_align: 1.0 sampling_rate: 24000 cache_generator_outputs: true pitch_extract: dio pitch_extract_conf: reduction_factor: 1 use_token_averaged_f0: false fs: 24000 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/tts_stats_raw_phn_g2pk_no_space/train/pitch_stats.npz energy_extract: energy energy_extract_conf: reduction_factor: 1 use_token_averaged_energy: false fs: 24000 n_fft: 1024 hop_length: 256 win_length: null energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/tts_stats_raw_phn_g2pk_no_space/train/energy_stats.npz required: - output_dir - token_list version: '202304' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ruyaka/ppo-Huggy
ruyaka
2023-07-08T08:46:50Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-08T08:46:44Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: ruyaka/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
devan666dewa/roop
devan666dewa
2023-07-08T08:34:50Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-08T08:34:50Z
--- license: creativeml-openrail-m ---
rdmpage/autotrain-lasiocampidae-73081139111
rdmpage
2023-07-08T08:15:33Z
182
0
transformers
[ "transformers", "pytorch", "safetensors", "swin", "image-classification", "autotrain", "vision", "dataset:rdmpage/autotrain-data-lasiocampidae", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-08T08:09:21Z
--- tags: - autotrain - vision - image-classification datasets: - rdmpage/autotrain-data-lasiocampidae widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 2.232916388389464 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 73081139111 - CO2 Emissions (in grams): 2.2329 ## Validation Metrics - Loss: 0.365 - Accuracy: 0.871 - Macro F1: 0.824 - Micro F1: 0.871 - Weighted F1: 0.865 - Macro Precision: 0.898 - Micro Precision: 0.871 - Weighted Precision: 0.874 - Macro Recall: 0.796 - Micro Recall: 0.871 - Weighted Recall: 0.871
Olegiy/q-Taxi-v3
Olegiy
2023-07-08T07:35:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T07:35:50Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.44 +/- 2.84 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Olegiy/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Olegiy/qFrozenLakev14x4noSlippery
Olegiy
2023-07-08T07:33:49Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T07:33:46Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: qFrozenLakev14x4noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Olegiy/qFrozenLakev14x4noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
hongrui/chest_v_1
hongrui
2023-07-08T07:33:43Z
2
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-03T23:39:03Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - hongrui/chest_v_1 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/xray_v_1 dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
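A minimal sketch for loading the LoRA weights above into the Stable Diffusion 1.5 base, assuming a recent diffusers release (`pipe.unet.load_attn_procs(...)` is the older equivalent); the prompt is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hongrui/chest_v_1")  # attach the LoRA adaption weights

image = pipe("chest x-ray", num_inference_steps=30).images[0]  # illustrative prompt
image.save("chest_xray_sample.png")
```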
wytrnyte/q-learning-taxi-v3
wytrnyte
2023-07-08T07:24:16Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T07:24:15Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-learning-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="wytrnyte/q-learning-taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
lordsauron/q-FrozenLake-v1-4x4-noSlippery
lordsauron
2023-07-08T07:13:46Z
0
1
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T07:13:44Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lordsauron/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
mrizalf7/xlm-r-qa-squad2.0-squad-1.1-unmerged
mrizalf7
2023-07-08T06:20:59Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-07-06T14:37:09Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlm-r-qa-squad2.0-squad-1.1-unmerged results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-r-qa-squad2.0-squad-1.1-unmerged This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2060 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9127 | 1.0 | 636 | 3.2060 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
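A minimal extractive-QA sketch for the XLM-R checkpoint above; the question and context are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrizalf7/xlm-r-qa-squad2.0-squad-1.1-unmerged")
result = qa(
    question="What was the base checkpoint?",
    context="This model was fine-tuned from mrizalf7/xlm-r-qa-squad-2.0, "
            "an XLM-RoBERTa model first trained on SQuAD 2.0.",
)
print(result["answer"], round(result["score"], 3))
```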
ridwanlekan/layoutlm-funsd
ridwanlekan
2023-07-08T05:12:24Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlm", "token-classification", "generated_from_trainer", "dataset:funsd", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-08T04:27:40Z
--- tags: - generated_from_trainer datasets: - funsd model-index: - name: layoutlm-funsd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset. It achieves the following results on the evaluation set: - Loss: 0.6659 - Answer: {'precision': 0.7130434782608696, 'recall': 0.8108776266996292, 'f1': 0.7588201272411799, 'number': 809} - Header: {'precision': 0.30578512396694213, 'recall': 0.31092436974789917, 'f1': 0.30833333333333335, 'number': 119} - Question: {'precision': 0.7858407079646018, 'recall': 0.8338028169014085, 'f1': 0.8091116173120729, 'number': 1065} - Overall Precision: 0.7282 - Overall Recall: 0.7933 - Overall F1: 0.7594 - Overall Accuracy: 0.8113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 1.7894 | 1.0 | 10 | 1.6087 | {'precision': 0.022050716648291068, 'recall': 0.024721878862793572, 'f1': 0.023310023310023312, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.21468926553672316, 'recall': 0.2140845070422535, 'f1': 0.21438645980253881, 'number': 1065} | 0.1260 | 0.1244 | 0.1252 | 0.3753 | | 1.4429 | 2.0 | 20 | 1.2246 | {'precision': 0.2103861517976032, 'recall': 0.19530284301606923, 'f1': 0.20256410256410257, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4474885844748858, 'recall': 0.5521126760563381, 'f1': 0.4943253467843632, 'number': 1065} | 0.3613 | 0.3743 | 0.3677 | 0.5866 | | 1.0606 | 3.0 | 30 | 0.9253 | {'precision': 0.5022075055187638, 'recall': 0.5624227441285538, 'f1': 0.5306122448979591, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.6054006968641115, 'recall': 0.6525821596244131, 'f1': 0.6281066425666515, 'number': 1065} | 0.5518 | 0.5770 | 0.5641 | 0.7066 | | 0.8153 | 4.0 | 40 | 0.7559 | {'precision': 0.6192893401015228, 'recall': 0.754017305315204, 'f1': 0.6800445930880714, 'number': 809} | {'precision': 0.21153846153846154, 'recall': 0.09243697478991597, 'f1': 0.1286549707602339, 'number': 119} | {'precision': 0.6809480401093893, 'recall': 0.7014084507042253, 'f1': 0.6910268270120259, 'number': 1065} | 0.6410 | 0.6864 | 0.6630 | 0.7565 | | 0.6686 | 5.0 | 50 | 0.6983 | {'precision': 0.6512378902045209, 
'recall': 0.7478368355995055, 'f1': 0.6962025316455697, 'number': 809} | {'precision': 0.25301204819277107, 'recall': 0.17647058823529413, 'f1': 0.20792079207920794, 'number': 119} | {'precision': 0.6876075731497419, 'recall': 0.7502347417840376, 'f1': 0.7175572519083969, 'number': 1065} | 0.6555 | 0.7150 | 0.6839 | 0.7797 | | 0.5578 | 6.0 | 60 | 0.6618 | {'precision': 0.6344969199178645, 'recall': 0.7639060568603214, 'f1': 0.6932136848008974, 'number': 809} | {'precision': 0.27586206896551724, 'recall': 0.20168067226890757, 'f1': 0.23300970873786409, 'number': 119} | {'precision': 0.6968724939855654, 'recall': 0.815962441314554, 'f1': 0.7517301038062284, 'number': 1065} | 0.6547 | 0.7582 | 0.7026 | 0.7895 | | 0.4916 | 7.0 | 70 | 0.6501 | {'precision': 0.6787234042553192, 'recall': 0.788627935723115, 'f1': 0.729559748427673, 'number': 809} | {'precision': 0.2523364485981308, 'recall': 0.226890756302521, 'f1': 0.23893805309734512, 'number': 119} | {'precision': 0.7281964436917866, 'recall': 0.8075117370892019, 'f1': 0.7658058771148708, 'number': 1065} | 0.6845 | 0.7652 | 0.7226 | 0.7975 | | 0.4501 | 8.0 | 80 | 0.6401 | {'precision': 0.6938110749185668, 'recall': 0.7898640296662547, 'f1': 0.738728323699422, 'number': 809} | {'precision': 0.26126126126126126, 'recall': 0.24369747899159663, 'f1': 0.25217391304347825, 'number': 119} | {'precision': 0.7434154630416313, 'recall': 0.8215962441314554, 'f1': 0.7805530776092775, 'number': 1065} | 0.6985 | 0.7742 | 0.7344 | 0.8066 | | 0.3986 | 9.0 | 90 | 0.6403 | {'precision': 0.7054945054945055, 'recall': 0.7935723114956736, 'f1': 0.7469458987783596, 'number': 809} | {'precision': 0.2537313432835821, 'recall': 0.2857142857142857, 'f1': 0.26877470355731226, 'number': 119} | {'precision': 0.7491496598639455, 'recall': 0.8272300469483568, 'f1': 0.786256135653726, 'number': 1065} | 0.7014 | 0.7812 | 0.7391 | 0.8069 | | 0.3621 | 10.0 | 100 | 0.6501 | {'precision': 0.7071038251366121, 'recall': 0.799752781211372, 'f1': 0.7505800464037122, 'number': 809} | {'precision': 0.29245283018867924, 'recall': 0.2605042016806723, 'f1': 0.27555555555555555, 'number': 119} | {'precision': 0.7715289982425307, 'recall': 0.8244131455399061, 'f1': 0.7970948706309579, 'number': 1065} | 0.7207 | 0.7807 | 0.7495 | 0.8085 | | 0.328 | 11.0 | 110 | 0.6625 | {'precision': 0.707742639040349, 'recall': 0.8022249690976514, 'f1': 0.7520278099652375, 'number': 809} | {'precision': 0.28688524590163933, 'recall': 0.29411764705882354, 'f1': 0.2904564315352697, 'number': 119} | {'precision': 0.7820738137082601, 'recall': 0.8356807511737089, 'f1': 0.8079891057648662, 'number': 1065} | 0.7230 | 0.7898 | 0.7549 | 0.8075 | | 0.3134 | 12.0 | 120 | 0.6655 | {'precision': 0.711038961038961, 'recall': 0.8121137206427689, 'f1': 0.7582227351413734, 'number': 809} | {'precision': 0.3135593220338983, 'recall': 0.31092436974789917, 'f1': 0.31223628691983124, 'number': 119} | {'precision': 0.7838078291814946, 'recall': 0.8272300469483568, 'f1': 0.8049337597076289, 'number': 1065} | 0.7271 | 0.7903 | 0.7574 | 0.8089 | | 0.2962 | 13.0 | 130 | 0.6583 | {'precision': 0.7161716171617162, 'recall': 0.8046971569839307, 'f1': 0.7578579743888243, 'number': 809} | {'precision': 0.3064516129032258, 'recall': 0.31932773109243695, 'f1': 0.31275720164609055, 'number': 119} | {'precision': 0.7808098591549296, 'recall': 0.8328638497652582, 'f1': 0.8059972739663789, 'number': 1065} | 0.7266 | 0.7908 | 0.7573 | 0.8089 | | 0.2823 | 14.0 | 140 | 0.6638 | {'precision': 0.7167755991285403, 'recall': 0.8133498145859085, 
'f1': 0.7620150550086855, 'number': 809} | {'precision': 0.3135593220338983, 'recall': 0.31092436974789917, 'f1': 0.31223628691983124, 'number': 119} | {'precision': 0.7834960070984915, 'recall': 0.8291079812206573, 'f1': 0.8056569343065694, 'number': 1065} | 0.7295 | 0.7918 | 0.7594 | 0.8102 | | 0.2796 | 15.0 | 150 | 0.6659 | {'precision': 0.7130434782608696, 'recall': 0.8108776266996292, 'f1': 0.7588201272411799, 'number': 809} | {'precision': 0.30578512396694213, 'recall': 0.31092436974789917, 'f1': 0.30833333333333335, 'number': 119} | {'precision': 0.7858407079646018, 'recall': 0.8338028169014085, 'f1': 0.8091116173120729, 'number': 1065} | 0.7282 | 0.7933 | 0.7594 | 0.8113 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
nishshekh/distilbert-base-uncased-finetuned-emotion
nishshekh
2023-07-08T05:11:40Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-08T03:31:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.927 - name: F1 type: f1 value: 0.9271664736493986 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. The model is trained in Chapter 2: Text Classification in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb). It achieves the following results on the evaluation set: - Loss: 0.2192 - Accuracy: 0.927 - F1: 0.9272 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8569 | 1.0 | 250 | 0.3386 | 0.894 | 0.8888 | | 0.2639 | 2.0 | 500 | 0.2192 | 0.927 | 0.9272 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.0 - Tokenizers 0.10.3
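The card documents training but not inference; a minimal sketch of using the checkpoint for emotion classification follows (the example sentence is made up):

```python
# Sketch: classify a sentence into one of the labels of the "emotion" dataset.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nishshekh/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so glad this finally worked!"))
```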
abdoeid/mT5_multilingual_XLSum-finetuned
abdoeid
2023-07-08T04:39:11Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-06T02:03:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
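The card lists only the quantization config; a loading sketch is below. The base checkpoint name is an assumption inferred from the repository name and is not stated in the card itself.

```python
# Sketch: reload the 8-bit base model and attach this PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

BASE = "csebuetnlp/mT5_multilingual_XLSum"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForSeq2SeqLM.from_pretrained(BASE, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base_model, "abdoeid/mT5_multilingual_XLSum-finetuned")
```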
Bugsys0302/Nanashi-Mumei-LoRA
Bugsys0302
2023-07-08T04:08:05Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-08T04:03:55Z
--- license: creativeml-openrail-m ---
morokosi/dqn-SpaceInvadersNoFrameskip-v4
morokosi
2023-07-08T03:59:36Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T03:57:10Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 546.00 +/- 168.07 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga morokosi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga morokosi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga morokosi ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Shridipta-06/a2c-PandaReachDense-v24
Shridipta-06
2023-07-08T03:54:31Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T03:51:35Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.22 +/- 0.44 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
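A sketch that completes the TODO above; the checkpoint filename follows the usual `<algo>-<env>.zip` convention of `huggingface_sb3` uploads and is an assumption, as is the `panda_gym` environment setup.

```python
# Sketch: download the assumed checkpoint and run one deterministic step.
import gym
import panda_gym  # noqa: F401  -- registers the PandaReachDense-v2 environment

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Shridipta-06/a2c-PandaReachDense-v24",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()  # classic gym API; newer gym versions return (obs, info)
action, _states = model.predict(obs, deterministic=True)
obs, reward, done, info = env.step(action)
```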
TigerResearch/tigerbot-7b-base-v1
TigerResearch
2023-07-08T03:50:35Z
16
11
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-31T15:06:17Z
--- license: apache-2.0 --- <div style="width: 100%;"> <img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;"> </div> <p align="center"> <font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font> </p> <p align="center"> 🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a> </p> ## Github https://github.com/TigerResearch/TigerBot ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("TigerResearch/tigerbot-7b-base-v1") model = AutoModelForCausalLM.from_pretrained("TigerResearch/tigerbot-7b-base-v1") ```
TigerResearch/tigerbot-7b-sft-v1
TigerResearch
2023-07-08T03:48:41Z
203
13
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-31T09:16:07Z
--- license: apache-2.0 --- <div style="width: 100%;"> <img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;"> </div> <p align="center"> <font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font> </p> <p align="center"> 🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a> </p> ## Github https://github.com/TigerResearch/TigerBot ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from accelerate import infer_auto_device_map, dispatch_model from accelerate.utils import get_balanced_memory tokenizer = AutoTokenizer.from_pretrained("TigerResearch/tigerbot-7b-sft-v1") model = AutoModelForCausalLM.from_pretrained("TigerResearch/tigerbot-7b-sft-v1") max_memory = get_balanced_memory(model) device_map = infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["BloomBlock"]) model = dispatch_model(model, device_map=device_map, offload_buffers=True) device = torch.cuda.current_device() tok_ins = "\n\n### Instruction:\n" tok_res = "\n\n### Response:\n" prompt_input = tok_ins + "{instruction}" + tok_res input_text = "What is the next number after this list: [1, 2, 3, 5, 8, 13, 21]" input_text = prompt_input.format_map({'instruction': input_text}) max_input_length = 512 max_generate_length = 1024 generation_kwargs = { "top_p": 0.95, "temperature": 0.8, "max_length": max_generate_length, "eos_token_id": tokenizer.eos_token_id, "pad_token_id": tokenizer.pad_token_id, "early_stopping": True, "no_repeat_ngram_size": 4, } inputs = tokenizer(input_text, return_tensors='pt', truncation=True, max_length=max_input_length) inputs = {k: v.to(device) for k, v in inputs.items()} output = model.generate(**inputs, **generation_kwargs) answer = '' for tok_id in output[0][inputs['input_ids'].shape[1]:]: if tok_id != tokenizer.eos_token_id: answer += tokenizer.decode(tok_id) print(answer) ```
Bugsys0302/goblin-girl
Bugsys0302
2023-07-08T03:44:50Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-08T03:43:19Z
--- license: creativeml-openrail-m ---
Bugsys0302/headback-lora
Bugsys0302
2023-07-08T03:33:13Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-08T03:09:39Z
--- license: creativeml-openrail-m ---
Shridipta-06/a2c-PandaReachDense-v23
Shridipta-06
2023-07-08T03:19:28Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T03:16:44Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -4.41 +/- 1.16 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
aroot/eng-guj-r3
aroot
2023-07-08T02:14:47Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T01:56:15Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-r3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-r3 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2820 - Bleu: 2.8377 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
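None of these auto-generated translation cards show inference; a sketch for this English-to-Gujarati checkpoint is below. The mBART-50 language codes (`en_XX`, `gu_IN`) are inferred from the model name, and the tokenizer may need to be loaded from the base `facebook/mbart-large-50-many-to-many-mmt` checkpoint if it is not bundled with this repo.

```python
# Sketch: translate English to Gujarati with the fine-tuned mBART-50 model.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-guj-r3")
tokenizer = MBart50TokenizerFast.from_pretrained(
    "aroot/eng-guj-r3", src_lang="en_XX", tgt_lang="gu_IN"
)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```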
aroot/eng-mya-r2
aroot
2023-07-08T02:12:23Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T01:53:54Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-r2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-r2 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8896 - Bleu: 4.0513 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-mya-r1
aroot
2023-07-08T02:09:47Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T01:50:28Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-r1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-r1 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8954 - Bleu: 3.9641 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
siemr/LunarLander
siemr
2023-07-08T02:08:29Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T04:53:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 291.72 +/- 16.72 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
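A sketch completing the TODO above; the stored filename is an assumption (most `huggingface_sb3` uploads use `ppo-LunarLander-v2.zip`), and `Box2D` must be installed for the LunarLander environment.

```python
# Sketch: load the assumed checkpoint and re-evaluate it over 10 episodes.
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

checkpoint = load_from_hub(repo_id="siemr/LunarLander", filename="ppo-LunarLander-v2.zip")  # assumed filename
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```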
aroot/eng-guj-r1
aroot
2023-07-08T01:31:32Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T01:10:20Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-r1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-r1 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2774 - Bleu: 2.7054 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
saintzeno/reinforce-Pixelcopter-PLE-v0
saintzeno
2023-07-08T01:28:52Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T05:51:14Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.70 +/- 25.08 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
aroot/eng-mya-simcse_random_usrl
aroot
2023-07-08T01:10:25Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:49:19Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_random_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_random_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8870 - Bleu: 4.2308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-mya-simcse_central_usrl
aroot
2023-07-08T01:07:00Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:45:39Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_central_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_central_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8843 - Bleu: 4.1587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-fra-r1
aroot
2023-07-08T00:56:31Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:37:59Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-r1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-r1 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1512 - Bleu: 31.7456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
LanzerPotaz/Dumb_Huggy_3.0
LanzerPotaz
2023-07-08T00:45:06Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-08T00:45:02Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to help you train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: LanzerPotaz/Dumb_Huggy_3.0 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
adalgu/qlora-koalpaca-polyglot-12.8b-50step
adalgu
2023-07-08T00:34:23Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-08T00:34:17Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
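To make the config above actionable, a loading sketch follows. Both the base checkpoint name and the use of `BitsAndBytesConfig` to mirror the listed flags are assumptions; the card itself only records the quantization settings.

```python
# Sketch: rebuild the 4-bit NF4 quantization config listed above and attach the adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

BASE = "beomi/KoAlpaca-Polyglot-12.8B"  # assumed base checkpoint
base_model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, "adalgu/qlora-koalpaca-polyglot-12.8b-50step")
tokenizer = AutoTokenizer.from_pretrained(BASE)
```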
aroot/eng-guj-simcse_random_usrl
aroot
2023-07-08T00:29:59Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:08:29Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_random_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_random_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2803 - Bleu: 2.8935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-guj-simcse_central_usrl
aroot
2023-07-08T00:25:52Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-08T00:04:17Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_central_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_central_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2765 - Bleu: 2.8046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
DIOS9/ppo-LunarLander-v2
DIOS9
2023-07-08T00:18:15Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-08T00:17:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.18 +/- 21.32 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
voidcenter/distilgpt2-finetuned-wikitext2
voidcenter
2023-07-07T23:55:51Z
202
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T23:15:18Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
aroot/eng-fra-simcse_random_usrl
aroot
2023-07-07T23:54:49Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T23:36:12Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_random_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_random_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1454 - Bleu: 31.8699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
TomyAI/anipan
TomyAI
2023-07-07T23:49:40Z
0
3
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T22:24:58Z
--- license: creativeml-openrail-m --- Trigger word: (character name) print panty. The hit rate for specific character names is low, but it will draw panties with some character printed on them. Because of the size constraints the face inevitably gets distorted, so please adjust it with inpaint.
aroot/eng-guj-simcse_central_ssrl
aroot
2023-07-07T23:42:40Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T23:24:29Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_central_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_central_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2825 - Bleu: 2.5968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
aroot/eng-guj-simcse_random_ssrl
aroot
2023-07-07T23:38:55Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T23:20:28Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_random_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_random_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2808 - Bleu: 2.6271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
mpetrikov/ppo-SnowballTarget
mpetrikov
2023-07-07T23:35:02Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-07T23:34:59Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to help you train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: mpetrikov/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
LarryAIDraw/Ruby
LarryAIDraw
2023-07-07T23:34:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T23:32:32Z
--- license: creativeml-openrail-m --- https://civitai.com/models/102477/hoshino-ruby-or-oshi-no-ko
LarryAIDraw/Raiden_Mei-Aqueous_Springtide_final
LarryAIDraw
2023-07-07T23:31:21Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T23:29:18Z
--- license: creativeml-openrail-m --- https://civitai.com/models/83603/raiden-mei-herrscher-of-thunder-aqueous-springtide-honkai-3rd
zhoubin/Bloom
zhoubin
2023-07-07T23:30:45Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-07-07T23:30:45Z
--- license: bigscience-bloom-rail-1.0 ---
aroot/eng-fra-simcse_random_ssrl
aroot
2023-07-07T23:06:31Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T22:51:26Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_random_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_random_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1462 - Bleu: 31.7089 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
ytung/q-FrozenLake-v1-4x4-noSlippery
ytung
2023-07-07T23:02:23Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-20T22:52:20Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="ytung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Erfan2001/Final_PersianTextClassificationModel
Erfan2001
2023-07-07T22:58:50Z
65
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T22:48:52Z
--- tags: - generated_from_keras_callback model-index: - name: my-awesome-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-model This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Tokenizers 0.13.3
algiraldohe/lm-ner-linkedin-skills-recognition
algiraldohe
2023-07-07T22:51:06Z
353
21
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-07T21:42:41Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: lm-ner-linkedin-skills-recognition results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lm-ner-linkedin-skills-recognition This model is a fine-tuned version of [algiraldohe/distilbert-base-uncased-linkedin-domain-adaptation](https://huggingface.co/algiraldohe/distilbert-base-uncased-linkedin-domain-adaptation) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0307 - Precision: 0.9119 - Recall: 0.9312 - F1: 0.9214 - Accuracy: 0.9912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1301 | 1.0 | 729 | 0.0468 | 0.8786 | 0.8715 | 0.8750 | 0.9863 | | 0.0432 | 2.0 | 1458 | 0.0345 | 0.8994 | 0.9219 | 0.9105 | 0.9900 | | 0.0332 | 3.0 | 2187 | 0.0307 | 0.9119 | 0.9312 | 0.9214 | 0.9912 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
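The card reports strong span-level metrics but no inference example; one might call it like this (the aggregation strategy and the sample sentence are choices of this sketch, not of the card):

```python
# Sketch: extract skill entities from a free-text job description.
from transformers import pipeline

skill_ner = pipeline(
    "token-classification",
    model="algiraldohe/lm-ner-linkedin-skills-recognition",
    aggregation_strategy="simple",  # merge sub-word pieces into whole skill spans
)
print(skill_ner("Looking for experience with Python, SQL and machine learning on AWS."))
```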
HaziqRazali/ppo-LunarLander-v2
HaziqRazali
2023-07-07T22:47:13Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T22:46:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.01 +/- 20.00 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dracero/dqn-LunarLander-v2
dracero
2023-07-07T22:36:45Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T22:36:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -71.70 +/- 16.99 name: mean_reward verified: false --- # **DQN** Agent playing **LunarLander-v2** This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
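A sketch completing the TODO above, this time rolling out a single greedy episode instead of re-evaluating; the checkpoint filename is an assumption.

```python
# Sketch: load the assumed checkpoint and play one greedy episode.
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(repo_id="dracero/dqn-LunarLander-v2", filename="dqn-LunarLander-v2.zip")  # assumed filename
model = DQN.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()  # classic gym API; newer gym versions return (obs, info)
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward:.2f}")
```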
varcoder/segformer-DeepCrack
varcoder
2023-07-07T22:26:23Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "generated_from_trainer", "license:other", "endpoints_compatible", "region:us" ]
null
2023-07-06T17:28:43Z
--- license: other tags: - generated_from_trainer model-index: - name: segformer-b0-DeepCrack results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-DeepCrack This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0017 - Mean Iou: 0.0 - Mean Accuracy: 0.0 - Overall Accuracy: 0.0 - Accuracy Background: nan - Accuracy Cracked: 0.0 - Iou Background: 0.0 - Iou Cracked: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Cracked | Iou Background | Iou Cracked | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:----------------:|:--------------:|:-----------:| | 0.2923 | 0.13 | 20 | 0.2120 | 0.0200 | 0.0399 | 0.0399 | nan | 0.0399 | 0.0 | 0.0399 | | 0.0959 | 0.27 | 40 | 0.0702 | 0.0661 | 0.1321 | 0.1321 | nan | 0.1321 | 0.0 | 0.1321 | | 0.0316 | 0.4 | 60 | 0.0378 | 0.0193 | 0.0387 | 0.0387 | nan | 0.0387 | 0.0 | 0.0387 | | 0.0184 | 0.53 | 80 | 0.0165 | 0.0306 | 0.0612 | 0.0612 | nan | 0.0612 | 0.0 | 0.0612 | | 0.0119 | 0.67 | 100 | 0.0108 | 0.0277 | 0.0554 | 0.0554 | nan | 0.0554 | 0.0 | 0.0554 | | 0.0083 | 0.8 | 120 | 0.0085 | 0.0381 | 0.0761 | 0.0761 | nan | 0.0761 | 0.0 | 0.0761 | | 0.0085 | 0.93 | 140 | 0.0118 | 0.0112 | 0.0223 | 0.0223 | nan | 0.0223 | 0.0 | 0.0223 | | 0.0072 | 1.07 | 160 | 0.0063 | 0.0289 | 0.0578 | 0.0578 | nan | 0.0578 | 0.0 | 0.0578 | | 0.0072 | 1.2 | 180 | 0.0057 | 0.0004 | 0.0009 | 0.0009 | nan | 0.0009 | 0.0 | 0.0009 | | 0.0038 | 1.33 | 200 | 0.0037 | 0.0004 | 0.0009 | 0.0009 | nan | 0.0009 | 0.0 | 0.0009 | | 0.0038 | 1.47 | 220 | 0.0035 | 0.0024 | 0.0048 | 0.0048 | nan | 0.0048 | 0.0 | 0.0048 | | 0.0037 | 1.6 | 240 | 0.0033 | 0.0035 | 0.0071 | 0.0071 | nan | 0.0071 | 0.0 | 0.0071 | | 0.004 | 1.73 | 260 | 0.0029 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0027 | 1.87 | 280 | 0.0027 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | | 0.0029 | 2.0 | 300 | 0.0025 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0032 | 2.13 | 320 | 0.0026 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0024 | 2.27 | 340 | 0.0023 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0021 | 2.4 | 360 | 0.0024 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0021 | 2.53 | 380 | 0.0021 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0026 | 2.67 | 400 | 0.0020 | 0.0000 | 0.0001 | 0.0001 | nan | 0.0001 | 0.0 | 0.0001 | | 0.002 | 2.8 | 420 | 0.0018 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0019 | 2.93 | 440 | 0.0020 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0023 | 3.07 | 460 | 0.0020 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | | 0.002 | 3.2 | 480 | 0.0019 | 0.0000 | 0.0000 | 0.0000 | nan | 
0.0000 | 0.0 | 0.0000 | | 0.0018 | 3.33 | 500 | 0.0019 | 0.0000 | 0.0001 | 0.0001 | nan | 0.0001 | 0.0 | 0.0001 | | 0.0018 | 3.47 | 520 | 0.0018 | 0.0000 | 0.0001 | 0.0001 | nan | 0.0001 | 0.0 | 0.0001 | | 0.0021 | 3.6 | 540 | 0.0017 | 0.0000 | 0.0000 | 0.0000 | nan | 0.0000 | 0.0 | 0.0000 | | 0.0018 | 3.73 | 560 | 0.0017 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | | 0.0017 | 3.87 | 580 | 0.0016 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | | 0.002 | 4.0 | 600 | 0.0017 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
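A minimal semantic-segmentation inference sketch for this checkpoint, assuming the standard `transformers` SegFormer classes; the repo id below is a placeholder (use this model's actual Hub id), and the two-class background/cracked layout follows the metrics table above.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo_id = "<namespace>/segformer-b0-DeepCrack"  # placeholder: replace with the actual Hub id
processor = SegformerImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("pavement.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax as the predicted mask
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
crack_mask = upsampled.argmax(dim=1)[0]
```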
cagito/tez
cagito
2023-07-07T22:20:40Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-07T22:19:50Z
--- license: openrail --- Install the dependency, then run the paraphrasing pipeline (the example input is Turkish for "A sentence containing plagiarism."): ```bash pip install transformers ``` ```python from transformers import pipeline paraphrase_generator = pipeline('text2text-generation', model='gpt2') original_text = "Intihal içeren bir cümle." paraphrased_text = paraphrase_generator(original_text, max_length=50, num_return_sequences=1) print(paraphrased_text[0]['generated_text']) ```
Khushnur/t5-base-end2end-questions-generation_eli_squad_single_exp
Khushnur
2023-07-07T22:17:13Z
164
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T20:33:49Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-base-end2end-questions-generation_eli_squad_single_exp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-end2end-questions-generation_eli_squad_single_exp This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4297 | 0.25 | 100 | 2.7250 | | 2.2459 | 0.5 | 200 | 2.7337 | | 2.2066 | 0.74 | 300 | 2.7301 | | 2.1867 | 0.99 | 400 | 2.7186 | | 2.1046 | 1.24 | 500 | 2.7268 | | 2.1003 | 1.49 | 600 | 2.7269 | | 2.0799 | 1.74 | 700 | 2.7222 | | 2.0852 | 1.99 | 800 | 2.7238 | | 2.0323 | 2.23 | 900 | 2.7258 | | 2.0297 | 2.48 | 1000 | 2.7252 | | 2.0451 | 2.73 | 1100 | 2.7230 | | 2.0208 | 2.98 | 1200 | 2.7241 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
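A minimal inference sketch for this checkpoint; the `generate questions:` prefix is an assumption, since the card does not document the exact prompt format used during fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "Khushnur/t5-base-end2end-questions-generation_eli_squad_single_exp"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Assumed prompt format: a task prefix followed by the context passage
text = (
    "generate questions: Paris is the capital of France and is known for "
    "landmarks such as the Eiffel Tower and the Louvre."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```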
neilsun2009/amz_movie_tv_distilgpt2_50k_random
neilsun2009
2023-07-07T22:14:44Z
4
0
peft
[ "peft", "gpt-2", "text-generation", "en", "region:us" ]
text-generation
2023-07-07T22:13:31Z
--- language: - en metrics: - perplexity library_name: peft pipeline_tag: text-generation tags: - gpt-2 ---
jordyvl/dit-small_tobacco3482_kd_MSE
jordyvl
2023-07-07T22:14:26Z
161
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-07T22:00:56Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: dit-small_tobacco3482_kd_MSE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dit-small_tobacco3482_kd_MSE This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.7275 - Accuracy: 0.21 - Brier Loss: 0.8834 - Nll: 6.7677 - F1 Micro: 0.2100 - F1 Macro: 0.1146 - Ece: 0.2647 - Aurc: 0.7666 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 0.96 | 3 | 7.1014 | 0.06 | 0.9055 | 7.9056 | 0.06 | 0.0114 | 0.1732 | 0.9050 | | No log | 1.96 | 6 | 6.9659 | 0.125 | 0.8970 | 10.1253 | 0.125 | 0.0631 | 0.2010 | 0.8465 | | No log | 2.96 | 9 | 6.8528 | 0.075 | 0.8954 | 7.0315 | 0.075 | 0.0258 | 0.1912 | 0.8871 | | No log | 3.96 | 12 | 6.8522 | 0.205 | 0.8955 | 7.0990 | 0.205 | 0.0776 | 0.2426 | 0.7588 | | No log | 4.96 | 15 | 6.8465 | 0.19 | 0.8959 | 7.1340 | 0.19 | 0.0627 | 0.2308 | 0.7536 | | No log | 5.96 | 18 | 6.8246 | 0.205 | 0.8937 | 7.1101 | 0.205 | 0.0867 | 0.2410 | 0.7354 | | No log | 6.96 | 21 | 6.8054 | 0.085 | 0.8918 | 7.0215 | 0.085 | 0.0435 | 0.1847 | 0.8289 | | No log | 7.96 | 24 | 6.8025 | 0.22 | 0.8879 | 6.8272 | 0.22 | 0.0967 | 0.2487 | 0.7438 | | No log | 8.96 | 27 | 6.8045 | 0.21 | 0.8871 | 6.3740 | 0.2100 | 0.0992 | 0.2412 | 0.7634 | | No log | 9.96 | 30 | 6.8013 | 0.22 | 0.8869 | 6.9538 | 0.22 | 0.1016 | 0.2495 | 0.7633 | | No log | 10.96 | 33 | 6.7920 | 0.215 | 0.8865 | 6.9670 | 0.2150 | 0.0968 | 0.2549 | 0.7577 | | No log | 11.96 | 36 | 6.7817 | 0.22 | 0.8867 | 6.9953 | 0.22 | 0.1004 | 0.2455 | 0.7437 | | No log | 12.96 | 39 | 6.7729 | 0.17 | 0.8884 | 6.9738 | 0.17 | 0.0891 | 0.2277 | 0.7865 | | No log | 13.96 | 42 | 6.7632 | 0.2 | 0.8873 | 6.9622 | 0.2000 | 0.0998 | 0.2393 | 0.7413 | | No log | 14.96 | 45 | 6.7548 | 0.215 | 0.8860 | 6.9576 | 0.2150 | 0.1010 | 0.2635 | 0.7189 | | No log | 15.96 | 48 | 6.7489 | 0.22 | 0.8857 | 6.8386 | 0.22 | 0.1024 | 0.2665 | 0.7098 | | No log | 16.96 | 51 | 6.7457 | 0.23 | 0.8855 | 6.8730 | 0.23 | 0.1129 | 0.2506 | 0.7217 | | No log | 17.96 | 54 | 6.7455 | 0.215 | 0.8864 | 6.8688 | 0.2150 | 0.1058 | 0.2576 | 0.7528 | | No log | 18.96 | 57 | 6.7424 | 0.16 | 0.8861 | 6.8631 | 0.16 | 0.0843 | 0.2281 | 0.8036 | | No log | 19.96 | 60 | 6.7380 | 0.155 | 0.8850 | 6.8443 | 0.155 | 0.0871 | 0.2315 | 0.7937 | | No log | 20.96 | 63 | 6.7348 | 0.195 | 0.8841 | 6.7769 | 0.195 | 0.0949 | 0.2501 | 0.7799 | | No log | 21.96 | 66 | 6.7317 | 0.175 | 0.8838 | 6.7692 | 0.175 | 0.1025 | 0.2421 | 0.7797 | | No log | 22.96 | 69 | 6.7293 | 0.175 | 0.8836 | 
6.7682 | 0.175 | 0.1012 | 0.2452 | 0.7799 | | No log | 23.96 | 72 | 6.7281 | 0.205 | 0.8834 | 6.7672 | 0.205 | 0.1132 | 0.2566 | 0.7679 | | No log | 24.96 | 75 | 6.7275 | 0.21 | 0.8834 | 6.7677 | 0.2100 | 0.1146 | 0.2647 | 0.7666 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
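A minimal image-classification sketch for this checkpoint, assuming it ships an image processor config and `id2label` entries for the Tobacco-3482 document classes.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "jordyvl/dit-small_tobacco3482_kd_MSE"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("document_scan.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the highest-scoring document class
print(model.config.id2label[logits.argmax(-1).item()])
```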
jordyvl/dit-tiny_tobacco3482_kd_MSE
jordyvl
2023-07-07T22:00:12Z
164
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-07T21:48:19Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: dit-tiny_tobacco3482_kd_MSE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dit-tiny_tobacco3482_kd_MSE This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.8328 - Accuracy: 0.19 - Brier Loss: 0.8942 - Nll: 7.0296 - F1 Micro: 0.19 - F1 Macro: 0.0703 - Ece: 0.2429 - Aurc: 0.8146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 0.96 | 3 | 7.1188 | 0.145 | 0.9003 | 10.1627 | 0.145 | 0.0253 | 0.2218 | 0.8463 | | No log | 1.96 | 6 | 7.0608 | 0.145 | 0.8969 | 9.8809 | 0.145 | 0.0253 | 0.2197 | 0.8454 | | No log | 2.96 | 9 | 6.9777 | 0.145 | 0.8929 | 8.9712 | 0.145 | 0.0442 | 0.2065 | 0.7921 | | No log | 3.96 | 12 | 6.9144 | 0.17 | 0.8908 | 4.9924 | 0.17 | 0.0413 | 0.2325 | 0.7807 | | No log | 4.96 | 15 | 6.8797 | 0.145 | 0.8912 | 6.8983 | 0.145 | 0.0399 | 0.2089 | 0.7932 | | No log | 5.96 | 18 | 6.8636 | 0.085 | 0.8926 | 6.9917 | 0.085 | 0.0299 | 0.1822 | 0.8755 | | No log | 6.96 | 21 | 6.8545 | 0.075 | 0.8946 | 7.0604 | 0.075 | 0.0307 | 0.1849 | 0.8758 | | No log | 7.96 | 24 | 6.8486 | 0.06 | 0.8958 | 7.1035 | 0.06 | 0.0230 | 0.1801 | 0.8891 | | No log | 8.96 | 27 | 6.8455 | 0.165 | 0.8967 | 7.1315 | 0.165 | 0.0604 | 0.2414 | 0.8438 | | No log | 9.96 | 30 | 6.8450 | 0.185 | 0.8973 | 7.1546 | 0.185 | 0.0468 | 0.2477 | 0.8436 | | No log | 10.96 | 33 | 6.8438 | 0.18 | 0.8969 | 7.1569 | 0.18 | 0.0308 | 0.2406 | 0.8504 | | No log | 11.96 | 36 | 6.8414 | 0.18 | 0.8962 | 7.1492 | 0.18 | 0.0306 | 0.2510 | 0.8501 | | No log | 12.96 | 39 | 6.8390 | 0.18 | 0.8958 | 7.1455 | 0.18 | 0.0306 | 0.2374 | 0.8494 | | No log | 13.96 | 42 | 6.8365 | 0.18 | 0.8950 | 7.0793 | 0.18 | 0.0306 | 0.2436 | 0.8488 | | No log | 14.96 | 45 | 6.8349 | 0.18 | 0.8944 | 7.0591 | 0.18 | 0.0306 | 0.2369 | 0.8486 | | No log | 15.96 | 48 | 6.8338 | 0.18 | 0.8942 | 7.0493 | 0.18 | 0.0306 | 0.2396 | 0.8482 | | No log | 16.96 | 51 | 6.8335 | 0.18 | 0.8940 | 7.0429 | 0.18 | 0.0309 | 0.2390 | 0.8486 | | No log | 17.96 | 54 | 6.8341 | 0.18 | 0.8943 | 7.0410 | 0.18 | 0.0314 | 0.2351 | 0.8514 | | No log | 18.96 | 57 | 6.8338 | 0.19 | 0.8943 | 7.0391 | 0.19 | 0.0495 | 0.2480 | 0.8471 | | No log | 19.96 | 60 | 6.8335 | 0.205 | 0.8943 | 7.0342 | 0.205 | 0.0722 | 0.2562 | 0.8204 | | No log | 20.96 | 63 | 6.8334 | 0.2 | 0.8942 | 7.0308 | 0.2000 | 0.0683 | 0.2541 | 0.8199 | | No log | 21.96 | 66 | 6.8332 | 0.195 | 0.8942 | 7.0296 | 0.195 | 0.0714 | 0.2511 | 0.8099 | | No log | 22.96 | 69 | 6.8330 | 0.195 | 0.8942 | 7.0297 | 0.195 | 
0.0717 | 0.2572 | 0.8123 | | No log | 23.96 | 72 | 6.8329 | 0.19 | 0.8942 | 7.0294 | 0.19 | 0.0703 | 0.2459 | 0.8148 | | No log | 24.96 | 75 | 6.8328 | 0.19 | 0.8942 | 7.0296 | 0.19 | 0.0703 | 0.2429 | 0.8146 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
amitvb/distilgpt2-finetuned-wikitext2
amitvb
2023-07-07T21:56:41Z
204
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T21:03:32Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
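A minimal text-generation sketch for this checkpoint using the high-level pipeline API.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="amitvb/distilgpt2-finetuned-wikitext2")
out = generator(
    "The history of natural language processing",
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
)
print(out[0]["generated_text"])
```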
andressrg/textual_inversion_meal_0_100
andressrg
2023-07-07T21:52:33Z
32
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T21:40:21Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - andressrg/textual_inversion_meal_0_100 These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
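A minimal sketch of loading these weights into the base pipeline with `diffusers`; the `<meal>` placeholder token is an assumption — use whichever token these embeddings were trained with.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("andressrg/textual_inversion_meal_0_100")

# "<meal>" is assumed; replace with the placeholder token used during training
image = pipe("a photo of <meal> on a wooden table", num_inference_steps=30).images[0]
image.save("meal.png")
```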
TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2-GGML
TheBloke
2023-07-07T21:34:49Z
0
2
null
[ "license:other", "region:us" ]
null
2023-07-07T21:30:55Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # H2O's GM OASST1 Falcon 7B v2 GGML These files are GGML format model files for [H2O's GM OASST1 Falcon 7B v2](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2). These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp. GGCC is a new format created in a new fork of llama.cpp that introduced this new Falcon GGML-based support: [cmp-nc/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp). Currently these files will also not work with code that previously supported Falcon, such as LoLLMs Web UI and ctransformers. But support should be added soon. ## Repositories available * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2) ## Prompt template: H2O ``` <|prompt|>prompt<|endoftext|><|answer|> ``` <!-- compatibility_ggml start --> ## Compatibility To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps: ``` git clone https://github.com/cmp-nct/ggllm.cpp cd ggllm.cpp rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release ``` Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.' Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example: ``` bin/falcon_main -t 8 -ngl 100 -b 1 -m h2ogpt-gm-oasst1-en-2048-falcon-7b-v2.ggccv1.q4_0.bin -enc -p "write a story about llamas" ``` Parameter `-enc` should automatically use the right prompt template for the model, so you can just enter your desired prompt. You can specify `-ngl 100` regardles of your VRAM, as it will automatically detect how much VRAM is available to be used. Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have. `-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter. Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | h2ogpt-gm-oasst1-en-2048-falcon-7b-v2.ggccv1.q4_0.bin | q4_0 | 4 | 4.06 GB| 6.56 GB | Original quant method, 4-bit. 
| | h2ogpt-gm-oasst1-en-2048-falcon-7b-v2.ggccv1.q4_1.bin | q4_1 | 4 | 4.51 GB| 7.01 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | h2ogpt-gm-oasst1-en-2048-falcon-7b-v2.ggccv1.q5_0.bin | q5_0 | 5 | 4.96 GB| 7.46 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | h2ogpt-gm-oasst1-en-2048-falcon-7b-v2.ggccv1.q5_1.bin | q5_1 | 5 | 5.42 GB| 7.92 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | h2ogpt-gm-oasst1-en-2048-falcon-7b-v2.ggccv1.q8_0.bin | q8_0 | 8 | 7.67 GB| 10.17 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: H2O's GM OASST1 Falcon 7B v2 # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). 
- Base model: [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) - Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch` and `einops` libraries installed. ```bash pip install transformers==4.29.2 pip install accelerate==0.19.0 pip install torch==2.0.0 pip install einops==0.6.1 ``` ```python import torch from transformers import AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2", use_fast=False, padding_side="left", trust_remote_code=True, ) generate_text = pipeline( model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2", tokenizer=tokenizer, torch_dtype=torch.float16, trust_remote_code=True, use_fast=False, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2", use_fast=False, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2", torch_dtype=torch.float16, device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=False, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` RWForCausalLM( (transformer): RWModel( (word_embeddings): Embedding(65024, 4544) (h): ModuleList( (0-31): 32 x DecoderLayer( (input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) (self_attention): Attention( (maybe_rotary): RotaryEmbedding() (query_key_value): Linear(in_features=4544, out_features=4672, bias=False) (dense): Linear(in_features=4544, out_features=4544, bias=False) (attention_dropout): Dropout(p=0.0, inplace=False) ) (mlp): MLP( (dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False) (act): GELU(approximate='none') (dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False) ) ) ) (ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=4544, out_features=65024, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Manab/donut-base-my_model_rapido_2_new_check_4
Manab
2023-07-07T21:29:12Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T21:22:11Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-my_model_rapido_2_new_check_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-my_model_rapido_2_new_check_4 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.1017 | 0.69 | 50 | 1.7221 | | 1.4162 | 1.39 | 100 | 0.8758 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
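A minimal document-parsing sketch for this checkpoint; the `<s>` task start token is an assumption — replace it with the prompt token this model was actually fine-tuned with.

```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo_id = "Manab/donut-base-my_model_rapido_2_new_check_4"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

image = Image.open("document.jpg").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumed task prompt; use the start token configured during fine-tuning
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=512,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)

# Strip padding/eos and the leading task token, then convert the tag sequence to JSON
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
print(processor.token2json(re.sub(r"<.*?>", "", sequence, count=1).strip()))
```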
openlm-research/open_llama_7b_v2
openlm-research
2023-07-07T21:26:13Z
3,256
116
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T08:23:04Z
--- license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T library_name: transformers --- # OpenLLaMA: An Open Reproduction of LLaMA **TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as the drop in replacement of LLaMA in existing implementations. In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details. ## Weights Release, License and Usage We release the weights in two formats: an EasyLM format to be use with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license. ### Loading the Weights with Hugging Face Transformers Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage. ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM ## v2 models model_path = 'openlm-research/open_llama_7b_v2' ## v1 models # model_path = 'openlm-research/open_llama_3b' # model_path = 'openlm-research/open_llama_7b' # model_path = 'openlm-research/open_llama_13b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) prompt = 'Q: What is the largest animal?\nA:' input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=32 ) print(tokenizer.decode(generation_output[0])) ``` For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama). ### Evaluating with LM-Eval-Harness The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. 
This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below: ```python tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained( pretrained if tokenizer is None else tokenizer, revision=revision + ("/" + subfolder if subfolder is not None else ""), use_fast=False ) ``` ### Loading the Weights with EasyLM For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so there is no need to obtain the original LLaMA tokenizer and weights. ## Dataset and Training The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA. We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model. ## Evaluation We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/). The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B | | ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- | | anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 | | anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 | | anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 | | arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 | | arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 | | arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 | | arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 | | boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 | | hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 | | hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 | | openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 | | openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 | | piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 | | piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 | | record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 | | record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 | | rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 | | truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 | | truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 | | wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 | | winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 | | Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 | We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set. ## Contact We would love to get feedback from the community. If you have any questions, please open an issue or contact us. OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution ## Acknowledgment We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback. The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support. 
## Reference If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX: ``` @software{openlm2023openllama, author = {Geng, Xinyang and Liu, Hao}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @software{together2023redpajama, author = {Together Computer}, title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset}, month = April, year = 2023, url = {https://github.com/togethercomputer/RedPajama-Data} } ``` ``` @article{touvron2023llama, title={Llama: Open and efficient foundation language models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
spacemanidol/flan-t5-base-5-5-xsum
spacemanidol
2023-07-07T21:25:32Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-27T15:39:27Z
--- tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: base-5-5 results: - task: name: Summarization type: summarization dataset: name: xsum type: xsum config: default split: validation args: default metrics: - name: Rouge1 type: rouge value: 38.7969 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-5-5 This model is a fine-tuned version of [x/base-5-5/](https://huggingface.co/x/base-5-5/) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 1.7414 - Rouge1: 38.7969 - Rouge2: 15.7213 - Rougel: 31.0769 - Rougelsum: 31.0667 - Gen Len: 26.9223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.10.0 - Tokenizers 0.13.2
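A minimal summarization sketch for this checkpoint; depending on how it was fine-tuned, a `summarize:` prefix may or may not be required.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="spacemanidol/flan-t5-base-5-5-xsum")
article = (
    "The local council has approved plans for a new cycle path linking the town "
    "centre to the railway station, with construction expected to begin next spring "
    "and funding provided by a regional transport grant."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False)[0]["summary_text"])
```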
mwz/UrduParaphraseBERT
mwz
2023-07-07T21:21:02Z
188
4
transformers
[ "transformers", "pytorch", "safetensors", "encoder-decoder", "text2text-generation", "paraphrase ", "ur", "dataset:mwz/ur_para", "license:mit", "autotrain_compatible", "region:us" ]
text2text-generation
2023-06-08T18:15:47Z
--- inference: false license: mit datasets: - mwz/ur_para language: - ur tags: - 'paraphrase ' --- # Urdu Paraphrasing Model This repository contains a trained Urdu paraphrasing model based on the BERT-based encoder-decoder architecture. The model has been fine-tuned on the Urdu Paraphrase Dataset and can generate paraphrases for given input sentences in Urdu. ## Model Description The model is built using the Hugging Face Transformers library and is trained on the BERT-base-uncased model. It employs an encoder-decoder architecture where the BERT model serves as the encoder, and another BERT model is used as the decoder. The model is trained to generate paraphrases by reconstructing the input sentences. ## Usage To use the trained model for paraphrasing Urdu sentences, you can follow the steps below: 1. Install the required dependencies (the `transformers` library and PyTorch). 2. Load the trained model using the Hugging Face Transformers library: ```python from transformers import EncoderDecoderModel, BertTokenizer # Load the model and tokenizer model = EncoderDecoderModel.from_pretrained("mwz/UrduParaphraseBERT") tokenizer = BertTokenizer.from_pretrained("mwz/UrduParaphraseBERT") def paraphrase_urdu_sentence(sentence): input_ids = tokenizer.encode(sentence, padding="longest", truncation=True, max_length=512, return_tensors="pt") generated_ids = model.generate(input_ids=input_ids, max_length=128, num_beams=4, no_repeat_ngram_size=2) paraphrase = tokenizer.decode(generated_ids[0], skip_special_tokens=True) return paraphrase sentence = "ایک مثالی روشنی کا مشہور نقطہ آبادی چھوٹی چھوٹی سڑکوں میں اپنے آپ کو خوشگوار کرسکتی ہے۔" paraphrased_sentence = paraphrase_urdu_sentence(sentence) print(paraphrased_sentence) ```
Manab/donut-base-my_model_rapido_2_new_check_3
Manab
2023-07-07T21:17:49Z
46
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:Manab/donut-base-my_model_rapido_2_new_check_2", "base_model:finetune:Manab/donut-base-my_model_rapido_2_new_check_2", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T21:05:53Z
--- license: mit base_model: Manab/donut-base-my_model_rapido_2_new_check_2 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-my_model_rapido_2_new_check_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-my_model_rapido_2_new_check_3 This model is a fine-tuned version of [Manab/donut-base-my_model_rapido_2_new_check_2](https://huggingface.co/Manab/donut-base-my_model_rapido_2_new_check_2) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3896 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8694 | 0.69 | 50 | 2.0758 | | 2.2421 | 1.39 | 100 | 1.7321 | | 1.6972 | 2.08 | 150 | 1.4280 | | 1.5866 | 2.78 | 200 | 1.3896 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
HeshamMamdouh/mbart-finetune-ar-xlsum-fine-tuned
HeshamMamdouh
2023-07-07T21:14:04Z
61
0
transformers
[ "transformers", "tf", "mbart", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T21:11:33Z
--- tags: - generated_from_keras_callback model-index: - name: mbart-finetune-ar-xlsum-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetune-ar-xlsum-fine-tuned This model is a fine-tuned version of [eslamxm/mbart-finetune-ar-xlsum](https://huggingface.co/eslamxm/mbart-finetune-ar-xlsum) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8386 - Validation Loss: 6.2675 - Train Lr: 2e-05 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Lr | Epoch | |:----------:|:---------------:|:--------:|:-----:| | 7.0615 | 6.1894 | 2e-05 | 0 | | 5.7395 | 5.8670 | 2e-05 | 1 | | 5.2896 | 5.7020 | 2e-05 | 2 | | 4.9490 | 5.6279 | 2e-05 | 3 | | 4.6278 | 5.6189 | 2e-05 | 4 | | 4.3330 | 5.6275 | 2e-05 | 5 | | 3.9812 | 5.7291 | 2e-05 | 6 | | 3.6283 | 5.8438 | 2e-05 | 7 | | 3.2183 | 6.0378 | 2e-05 | 8 | | 2.8386 | 6.2675 | 2e-05 | 9 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.13.0 - Datasets 2.13.1 - Tokenizers 0.13.3
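A minimal TensorFlow inference sketch for this checkpoint; whether `tokenizer.src_lang` must be set for Arabic depends on the underlying mBART variant, so treat that part as an assumption.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo_id = "HeshamMamdouh/mbart-finetune-ar-xlsum-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo_id)

# If the base checkpoint is an mBART-50 variant, the source language may need to be set:
# tokenizer.src_lang = "ar_AR"
text = "..."  # an Arabic news article to summarize
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```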
TheBloke/Falcon-7B-Instruct-GGML
TheBloke
2023-07-07T21:09:02Z
27
41
transformers
[ "transformers", "falcon", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "region:us" ]
null
2023-06-21T13:32:23Z
--- inference: false datasets: - tiiuae/falcon-refinedweb language: - en widget: - text: "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?" example_title: "Abu Dhabi Trip" - text: "What's the Everett interpretation of quantum mechanics?" example_title: "Q/A: Quantum & Answers" - text: "Give me a list of the top 10 dive sites you would recommend around the world." example_title: "Diving Top 10" - text: "Can you tell me more about deep-water soloing?" example_title: "Extreme sports" - text: "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?" example_title: "Twitter Helper" - text: "What are the responsabilities of a Chief Llama Officer?" example_title: "Trendy Jobs" license: apache-2.0 --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # TII's Falcon 7B Instruct GGML These files are GGML format model files for [TII's Falcon 7B Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct). These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp. GGCC is a new format created in a new fork of llama.cpp that introduced this new Falcon GGML-based support: [cmp-nc/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp). Currently these files will also not work with code that previously supported Falcon, such as LoLLMs Web UI and ctransformers. But support should be added soon. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/falcon-7B-instruct-GPTQ) * [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/falcon-7B-instruct-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/tiiuae/falcon-7b-instruct) ## Prompt template: Falcon ``` User: prompt Assistant: ``` <!-- compatibility_ggml start --> ## Compatibility To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps: ``` git clone https://github.com/cmp-nct/ggllm.cpp cd ggllm.cpp rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release ``` Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.' Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example: ``` bin/falcon_main -t 8 -ngl 100 -b 1 -m falcon-7b-instruct.ggccv1.q4_0.bin -enc -p "write a story about llamas" ``` Parameter `-enc` should automatically use the right prompt template for the model, so you can just enter your desired prompt. You can specify `-ngl 100` regardles of your VRAM, as it will automatically detect how much VRAM is available to be used. Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. 
Do not exceed the number of physical CPU cores you have. `-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter. Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | falcon-7b-instruct.ggccv1.q4_0.bin | q4_0 | 4 | 4.06 GB| 6.56 GB | Original quant method, 4-bit. | | falcon-7b-instruct.ggccv1.q4_1.bin | q4_1 | 4 | 4.51 GB| 7.01 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | falcon-7b-instruct.ggccv1.q5_0.bin | q5_0 | 5 | 4.96 GB| 7.46 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | falcon-7b-instruct.ggccv1.q5_1.bin | q5_1 | 5 | 5.42 GB| 7.92 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | falcon-7b-instruct.ggccv1.q8_0.bin | q8_0 | 8 | 7.67 GB| 10.17 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! 
<!-- footer end --> # Original model card: TII's Falcon 7B Instruct # ✨ Falcon-7B-Instruct **Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.** *Paper coming soon 😊.* 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)! ## Why use Falcon-7B-Instruct? * **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).** * **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)). 💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). 🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother! ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!** For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon). You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct. # Model Card for Falcon-7B-Instruct ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English and French; - **License:** Apache 2.0; - **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). ### Model Source - **Paper:** *coming soon*. ## Uses ### Direct Use Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.

| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

Note that this model variant is not optimized for NLP benchmarks.

## Technical Specifications

For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |

### Compute Infrastructure

#### Hardware

Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.

#### Software

Falcon-7B-Instruct was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

*Paper coming soon* 😊. In the meantime, you can use the following information to cite:

```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype = {arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

## License

Falcon-7B-Instruct is made available under the Apache 2.0 license.

## Contact
[email protected]
ybkscht/ppo-LunarLander-v2
ybkscht
2023-07-07T21:06:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T21:06:27Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 269.17 +/- 12.22
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename below is an assumption, so check the repository's file list if it differs:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed).
checkpoint = load_from_hub("ybkscht/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
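To sanity-check the reported mean reward locally, here is a short evaluation sketch. It repeats the loading step so it runs on its own, and it assumes a stable-baselines3 release with gymnasium support plus the Box2D dependency installed; the checkpoint filename is again an assumption.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Download and load the agent (filename assumed, as above).
checkpoint = load_from_hub("ybkscht/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate over a handful of episodes and report mean +/- std reward.
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```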
C-Lo/finetuning-sentiment-unfiltered-dataset
C-Lo
2023-07-07T21:06:12Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T21:03:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: finetuning-sentiment-unfiltered-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-unfiltered-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
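Since the card does not include a usage snippet, here is a minimal inference sketch. It assumes the fine-tuned head keeps the default `LABEL_0`/`LABEL_1` names unless `id2label` was set during training; the example sentences are placeholders.

```python
from transformers import pipeline

# Sentiment classifier fine-tuned on IMDB reviews.
classifier = pipeline("text-classification", model="C-Lo/finetuning-sentiment-unfiltered-dataset")

print(classifier("This movie was a complete waste of time."))
print(classifier("An absolute delight from start to finish!"))
```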
Manab/donut-base-my_model_rapido_2_new_check_2
Manab
2023-07-07T21:01:48Z
49
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-07T20:30:32Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-my_model_rapido_2_new_check_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-my_model_rapido_2_new_check_2 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.8819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 9.9118 | 0.69 | 50 | 6.5666 | | 6.0851 | 1.39 | 100 | 4.2864 | | 4.4899 | 2.08 | 150 | 3.3172 | | 3.6628 | 2.78 | 200 | 2.8819 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
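A hedged inference sketch for this fine-tuned Donut checkpoint. The input image path and, in particular, the task start token are assumptions: replace them with a real document image and with the prompt actually used during fine-tuning.

```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Manab/donut-base-my_model_rapido_2_new_check_2"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # hypothetical input scan
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumption: the task start token used during fine-tuning.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)

# Decode and strip special tokens to inspect the raw generated sequence.
sequence = processor.batch_decode(outputs)[0]
print(re.sub(r"<.*?>", "", sequence).strip())
```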
guyson/Bluemoon_30b_safetensors_only
guyson
2023-07-07T21:01:18Z
8
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T17:58:46Z
For my own use. All credit for the original model goes to: https://huggingface.co/reeducator/bluemoonrp-30b/tree/main
dracero/ppo-LunarLander-v2
dracero
2023-07-07T20:56:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T20:54:42Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 242.61 +/- 16.03
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

Diego's upload code, completed into a runnable sketch: the repo id and environment id are filled in from this card, while the training call and commit message are placeholders.

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.env_util import make_vec_env
from huggingface_sb3 import package_to_hub

# repo_id is the id of the model repository on the Hugging Face Hub
# (repo_id = {organization}/{repo_name}, for instance ThomasSimonini/ppo-LunarLander-v2)
repo_id = "dracero/ppo-LunarLander-v2"

# Name of the environment
env_id = "LunarLander-v2"

# Train the agent to push (the training settings here are placeholders)
model_name = "ppo-LunarLander-v2"
model = PPO("MlpPolicy", make_vec_env(env_id, n_envs=16), verbose=0)
model.learn(total_timesteps=1_000_000)

# Create the evaluation env and set render_mode="rgb_array"
eval_env = DummyVecEnv([lambda: Monitor(gym.make(env_id, render_mode="rgb_array"))])

# The model architecture we used
model_architecture = "PPO"

# The commit message
commit_message = "Upload PPO agent trained on LunarLander-v2"

# package_to_hub saves, evaluates, generates a model card and records a replay
# video of the agent before pushing the repo to the Hub.
package_to_hub(
    model=model,                            # our trained model
    model_name=model_name,                  # the name of our trained model
    model_architecture=model_architecture,  # the model architecture we used: in our case PPO
    env_id=env_id,                          # name of the environment
    eval_env=eval_env,                      # evaluation environment
    repo_id=repo_id,                        # id of the model repository on the Hugging Face Hub
    commit_message=commit_message,
)
```
vimassaru/segformer-b2-finetuned-teeth-segmentation
vimassaru
2023-07-07T20:51:12Z
5
0
transformers
[ "transformers", "image-segmentation", "pt", "dataset:vimassaru/teethsegmentation", "endpoints_compatible", "region:us" ]
image-segmentation
2023-07-07T19:33:25Z
--- datasets: - vimassaru/teethsegmentation language: - pt metrics: - mean_iou library_name: transformers pipeline_tag: image-segmentation ---
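The card is metadata-only, so here is a hedged inference sketch. It assumes the repository ships the usual preprocessor and model configs and exposes standard SegFormer weights (as the model name suggests) that load with `SegformerForSemanticSegmentation`; the input image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "vimassaru/segformer-b2-finetuned-teeth-segmentation"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("panoramic_xray.png").convert("RGB")  # hypothetical input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, height/4, width/4)

# Upsample to the original resolution and take the per-pixel argmax as the mask.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
print(mask.shape, mask.unique())
```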
Tyffuss86/Polsk
Tyffuss86
2023-07-07T20:46:17Z
0
0
null
[ "text-to-video", "region:us" ]
text-to-video
2023-07-07T20:43:27Z
--- pipeline_tag: text-to-video ---
pineiden/nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner
pineiden
2023-07-07T20:10:20Z
133
3
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "es", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-07T14:20:19Z
--- language: - es license: openrail tags: - generated_from_trainer model-index: - name: nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner This model is a fine-tuned version of [ukkendane/bert-medical-ner](https://huggingface.co/ukkendane/bert-medical-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3607 - Body Part Precision: 0.6555 - Body Part Recall: 0.7094 - Body Part F1: 0.6814 - Body Part Number: 413 - Disease Precision: 0.6835 - Disease Recall: 0.7067 - Disease F1: 0.6949 - Disease Number: 975 - Family Member Precision: 1.0 - Family Member Recall: 0.6 - Family Member F1: 0.7500 - Family Member Number: 30 - Medication Precision: 0.7647 - Medication Recall: 0.6989 - Medication F1: 0.7303 - Medication Number: 93 - Procedure Precision: 0.5385 - Procedure Recall: 0.5402 - Procedure F1: 0.5393 - Procedure Number: 311 - Overall Precision: 0.6594 - Overall Recall: 0.6767 - Overall F1: 0.6679 - Overall Accuracy: 0.9079 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.4541 | 1.0 | 8025 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3149 | 2.0 | 16050 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3161 | 3.0 | 24075 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3181 | 4.0 | 32100 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 
| 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | | 0.3164 | 5.0 | 40125 | 0.3607 | 0.6555 | 0.7094 | 0.6814 | 413 | 0.6835 | 0.7067 | 0.6949 | 975 | 1.0 | 0.6 | 0.7500 | 30 | 0.7647 | 0.6989 | 0.7303 | 93 | 0.5385 | 0.5402 | 0.5393 | 311 | 0.6594 | 0.6767 | 0.6679 | 0.9079 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
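A short inference sketch for the tagger described above. The entity label strings come from the checkpoint's `id2label`, so treat the printed group names as whatever the training config defined (the metrics table suggests Body Part, Disease, Family Member, Medication and Procedure); the example sentence is a placeholder.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pineiden/nominal-groups-recognition-medical-disease-competencia2-bert-medical-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

texto = "El paciente presenta dolor abdominal y fue tratado con paracetamol."
for entidad in ner(texto):
    print(entidad["entity_group"], entidad["word"], round(entidad["score"], 3))
```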
Jade1211/textual_inversion_baby
Jade1211
2023-07-07T20:10:18Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T18:08:50Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_baby

These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the repository.
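A usage sketch, assuming a recent `diffusers` release with `load_textual_inversion` and a CUDA GPU (drop the fp16 and `cuda` bits for CPU). The placeholder token string below is our own choice; pass whichever token you want to bind the learned embedding to.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Bind the learned embedding from this repo to the chosen placeholder token.
pipe.load_textual_inversion("Jade1211/textual_inversion_baby", token="<baby>")

image = pipe("a photo of <baby> playing in a park").images[0]
image.save("baby_concept.png")
```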
NasimB/gpt2-concat-guten-rarity-no-self-5k-2p5k
NasimB
2023-07-07T20:05:22Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T16:54:17Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-guten-rarity-no-self-5k-2p5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-guten-rarity-no-self-5k-2p5k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7154 | 0.3 | 500 | 5.6337 | | 5.3547 | 0.59 | 1000 | 5.2022 | | 5.0081 | 0.89 | 1500 | 4.9512 | | 4.7302 | 1.19 | 2000 | 4.8080 | | 4.5763 | 1.48 | 2500 | 4.6787 | | 4.4708 | 1.78 | 3000 | 4.5735 | | 4.3233 | 2.08 | 3500 | 4.4955 | | 4.1495 | 2.38 | 4000 | 4.4403 | | 4.1221 | 2.67 | 4500 | 4.3880 | | 4.0727 | 2.97 | 5000 | 4.3314 | | 3.8364 | 3.27 | 5500 | 4.3310 | | 3.8224 | 3.56 | 6000 | 4.2957 | | 3.7974 | 3.86 | 6500 | 4.2621 | | 3.6435 | 4.16 | 7000 | 4.2713 | | 3.5241 | 4.45 | 7500 | 4.2570 | | 3.5226 | 4.75 | 8000 | 4.2443 | | 3.4814 | 5.05 | 8500 | 4.2450 | | 3.332 | 5.34 | 9000 | 4.2494 | | 3.3289 | 5.64 | 9500 | 4.2479 | | 3.3291 | 5.94 | 10000 | 4.2469 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
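The card stops at the training log, so here is a minimal generation sketch; it assumes the repository ships both the tokenizer and the language-model head, which the auto-generated card does not state explicitly.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NasimB/gpt2-concat-guten-rarity-no-self-5k-2p5k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```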
GalSarid/setfit-movie-genre-sentence-t5-xl
GalSarid
2023-07-07T20:04:50Z
4
1
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-04T21:34:54Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # GalSarid/setfit-movie-genre-sentence-t5-xl This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("GalSarid/setfit-movie-genre-sentence-t5-xl") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
anttip/ct2fast-e5-small-v2-hfie
anttip
2023-07-07T20:04:37Z
8
2
transformers
[ "transformers", "bert", "feature-extraction", "ctranslate2", "int8", "float16", "mteb", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-07-07T19:30:13Z
--- tags: - ctranslate2 - int8 - float16 - mteb model-index: - name: e5-small-v2 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.59701492537313 - type: ap value: 41.67064885731708 - type: f1 value: 71.86465946398573 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.265875 - type: ap value: 87.67633085349644 - type: f1 value: 91.24297521425744 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.882000000000005 - type: f1 value: 45.08058870381236 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 20.697 - type: map_at_10 value: 33.975 - type: map_at_100 value: 35.223 - type: map_at_1000 value: 35.260000000000005 - type: map_at_3 value: 29.776999999999997 - type: map_at_5 value: 32.035000000000004 - type: mrr_at_1 value: 20.982 - type: mrr_at_10 value: 34.094 - type: mrr_at_100 value: 35.343 - type: mrr_at_1000 value: 35.38 - type: mrr_at_3 value: 29.884 - type: mrr_at_5 value: 32.141999999999996 - type: ndcg_at_1 value: 20.697 - type: ndcg_at_10 value: 41.668 - type: ndcg_at_100 value: 47.397 - type: ndcg_at_1000 value: 48.305 - type: ndcg_at_3 value: 32.928000000000004 - type: ndcg_at_5 value: 36.998999999999995 - type: precision_at_1 value: 20.697 - type: precision_at_10 value: 6.636 - type: precision_at_100 value: 0.924 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.035 - type: precision_at_5 value: 10.398 - type: recall_at_1 value: 20.697 - type: recall_at_10 value: 66.35799999999999 - type: recall_at_100 value: 92.39 - type: recall_at_1000 value: 99.36 - type: recall_at_3 value: 42.105 - type: recall_at_5 value: 51.991 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.1169517447068 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 34.79553720107097 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.10811337308168 - type: mrr value: 71.56410763751482 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 78.46834918248696 - type: cos_sim_spearman value: 79.4289182755206 - type: euclidean_pearson value: 76.26662973727008 - type: euclidean_spearman value: 78.11744260952536 - type: manhattan_pearson value: 76.08175262609434 - type: manhattan_spearman value: 78.29395265552289 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 
0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.63636363636364 - type: f1 value: 81.55779952376953 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.88541137137571 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.05205685274407 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.293999999999997 - type: map_at_10 value: 39.876 - type: map_at_100 value: 41.315000000000005 - type: map_at_1000 value: 41.451 - type: map_at_3 value: 37.194 - type: map_at_5 value: 38.728 - type: mrr_at_1 value: 37.053000000000004 - type: mrr_at_10 value: 45.281 - type: mrr_at_100 value: 46.188 - type: mrr_at_1000 value: 46.245999999999995 - type: mrr_at_3 value: 43.228 - type: mrr_at_5 value: 44.366 - type: ndcg_at_1 value: 37.053000000000004 - type: ndcg_at_10 value: 45.086 - type: ndcg_at_100 value: 50.756 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 41.416 - type: ndcg_at_5 value: 43.098 - type: precision_at_1 value: 37.053000000000004 - type: precision_at_10 value: 8.34 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.647000000000002 - type: precision_at_5 value: 13.877 - type: recall_at_1 value: 30.293999999999997 - type: recall_at_10 value: 54.309 - type: recall_at_100 value: 78.59 - type: recall_at_1000 value: 93.82300000000001 - type: recall_at_3 value: 43.168 - type: recall_at_5 value: 48.192 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.738000000000003 - type: map_at_10 value: 36.925999999999995 - type: map_at_100 value: 38.017 - type: map_at_1000 value: 38.144 - type: map_at_3 value: 34.446 - type: map_at_5 value: 35.704 - type: mrr_at_1 value: 35.478 - type: mrr_at_10 value: 42.786 - type: mrr_at_100 value: 43.458999999999996 - type: mrr_at_1000 value: 43.507 - type: mrr_at_3 value: 40.648 - type: mrr_at_5 value: 41.804 - type: ndcg_at_1 value: 35.478 - type: ndcg_at_10 value: 42.044 - type: ndcg_at_100 value: 46.249 - type: ndcg_at_1000 value: 48.44 - type: ndcg_at_3 value: 38.314 - type: ndcg_at_5 value: 39.798 - type: precision_at_1 value: 35.478 - type: precision_at_10 value: 7.764 - type: precision_at_100 value: 1.253 - type: precision_at_1000 value: 0.174 - type: precision_at_3 value: 18.047 - type: precision_at_5 value: 12.637 - type: recall_at_1 value: 28.738000000000003 - type: recall_at_10 value: 50.659 - type: recall_at_100 value: 68.76299999999999 - type: recall_at_1000 value: 82.811 - type: recall_at_3 value: 39.536 - type: recall_at_5 value: 43.763999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.565 - type: map_at_10 value: 50.168 - type: map_at_100 value: 51.11 - type: map_at_1000 value: 51.173 - type: map_at_3 value: 47.044000000000004 - type: map_at_5 value: 48.838 - type: mrr_at_1 value: 44.201 - type: mrr_at_10 value: 53.596999999999994 - type: mrr_at_100 value: 54.211 - 
type: mrr_at_1000 value: 54.247 - type: mrr_at_3 value: 51.202000000000005 - type: mrr_at_5 value: 52.608999999999995 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 55.694 - type: ndcg_at_100 value: 59.518 - type: ndcg_at_1000 value: 60.907 - type: ndcg_at_3 value: 50.395999999999994 - type: ndcg_at_5 value: 53.022999999999996 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 8.84 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.153 - type: precision_at_5 value: 15.260000000000002 - type: recall_at_1 value: 38.565 - type: recall_at_10 value: 68.65 - type: recall_at_100 value: 85.37400000000001 - type: recall_at_1000 value: 95.37400000000001 - type: recall_at_3 value: 54.645999999999994 - type: recall_at_5 value: 60.958 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.945 - type: map_at_10 value: 30.641000000000002 - type: map_at_100 value: 31.599 - type: map_at_1000 value: 31.691000000000003 - type: map_at_3 value: 28.405 - type: map_at_5 value: 29.704000000000004 - type: mrr_at_1 value: 25.537 - type: mrr_at_10 value: 32.22 - type: mrr_at_100 value: 33.138 - type: mrr_at_1000 value: 33.214 - type: mrr_at_3 value: 30.151 - type: mrr_at_5 value: 31.298 - type: ndcg_at_1 value: 25.537 - type: ndcg_at_10 value: 34.638000000000005 - type: ndcg_at_100 value: 39.486 - type: ndcg_at_1000 value: 41.936 - type: ndcg_at_3 value: 30.333 - type: ndcg_at_5 value: 32.482 - type: precision_at_1 value: 25.537 - type: precision_at_10 value: 5.153 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 12.429 - type: precision_at_5 value: 8.723 - type: recall_at_1 value: 23.945 - type: recall_at_10 value: 45.412 - type: recall_at_100 value: 67.836 - type: recall_at_1000 value: 86.467 - type: recall_at_3 value: 34.031 - type: recall_at_5 value: 39.039 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.419 - type: map_at_10 value: 20.858999999999998 - type: map_at_100 value: 22.067999999999998 - type: map_at_1000 value: 22.192 - type: map_at_3 value: 18.673000000000002 - type: map_at_5 value: 19.968 - type: mrr_at_1 value: 17.785999999999998 - type: mrr_at_10 value: 24.878 - type: mrr_at_100 value: 26.021 - type: mrr_at_1000 value: 26.095000000000002 - type: mrr_at_3 value: 22.616 - type: mrr_at_5 value: 23.785 - type: ndcg_at_1 value: 17.785999999999998 - type: ndcg_at_10 value: 25.153 - type: ndcg_at_100 value: 31.05 - type: ndcg_at_1000 value: 34.052 - type: ndcg_at_3 value: 21.117 - type: ndcg_at_5 value: 23.048 - type: precision_at_1 value: 17.785999999999998 - type: precision_at_10 value: 4.590000000000001 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 9.908999999999999 - type: precision_at_5 value: 7.313 - type: recall_at_1 value: 14.419 - type: recall_at_10 value: 34.477999999999994 - type: recall_at_100 value: 60.02499999999999 - type: recall_at_1000 value: 81.646 - type: recall_at_3 value: 23.515 - type: recall_at_5 value: 28.266999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.268 - type: map_at_10 value: 
35.114000000000004 - type: map_at_100 value: 36.212 - type: map_at_1000 value: 36.333 - type: map_at_3 value: 32.436 - type: map_at_5 value: 33.992 - type: mrr_at_1 value: 31.761 - type: mrr_at_10 value: 40.355999999999995 - type: mrr_at_100 value: 41.125 - type: mrr_at_1000 value: 41.186 - type: mrr_at_3 value: 37.937 - type: mrr_at_5 value: 39.463 - type: ndcg_at_1 value: 31.761 - type: ndcg_at_10 value: 40.422000000000004 - type: ndcg_at_100 value: 45.458999999999996 - type: ndcg_at_1000 value: 47.951 - type: ndcg_at_3 value: 35.972 - type: ndcg_at_5 value: 38.272 - type: precision_at_1 value: 31.761 - type: precision_at_10 value: 7.103 - type: precision_at_100 value: 1.133 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.779 - type: precision_at_5 value: 11.877 - type: recall_at_1 value: 26.268 - type: recall_at_10 value: 51.053000000000004 - type: recall_at_100 value: 72.702 - type: recall_at_1000 value: 89.521 - type: recall_at_3 value: 38.619 - type: recall_at_5 value: 44.671 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.230999999999998 - type: map_at_10 value: 34.227000000000004 - type: map_at_100 value: 35.370000000000005 - type: map_at_1000 value: 35.488 - type: map_at_3 value: 31.496000000000002 - type: map_at_5 value: 33.034 - type: mrr_at_1 value: 30.822 - type: mrr_at_10 value: 39.045 - type: mrr_at_100 value: 39.809 - type: mrr_at_1000 value: 39.873 - type: mrr_at_3 value: 36.663000000000004 - type: mrr_at_5 value: 37.964 - type: ndcg_at_1 value: 30.822 - type: ndcg_at_10 value: 39.472 - type: ndcg_at_100 value: 44.574999999999996 - type: ndcg_at_1000 value: 47.162 - type: ndcg_at_3 value: 34.929 - type: ndcg_at_5 value: 37.002 - type: precision_at_1 value: 30.822 - type: precision_at_10 value: 7.055 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.591 - type: precision_at_5 value: 11.667 - type: recall_at_1 value: 25.230999999999998 - type: recall_at_10 value: 50.42100000000001 - type: recall_at_100 value: 72.685 - type: recall_at_1000 value: 90.469 - type: recall_at_3 value: 37.503 - type: recall_at_5 value: 43.123 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.604166666666664 - type: map_at_10 value: 32.427166666666665 - type: map_at_100 value: 33.51474999999999 - type: map_at_1000 value: 33.6345 - type: map_at_3 value: 30.02366666666667 - type: map_at_5 value: 31.382333333333328 - type: mrr_at_1 value: 29.001166666666666 - type: mrr_at_10 value: 36.3315 - type: mrr_at_100 value: 37.16683333333333 - type: mrr_at_1000 value: 37.23341666666668 - type: mrr_at_3 value: 34.19916666666667 - type: mrr_at_5 value: 35.40458333333334 - type: ndcg_at_1 value: 29.001166666666666 - type: ndcg_at_10 value: 37.06883333333334 - type: ndcg_at_100 value: 41.95816666666666 - type: ndcg_at_1000 value: 44.501583333333336 - type: ndcg_at_3 value: 32.973499999999994 - type: ndcg_at_5 value: 34.90833333333334 - type: precision_at_1 value: 29.001166666666666 - type: precision_at_10 value: 6.336 - type: precision_at_100 value: 1.0282499999999999 - type: precision_at_1000 value: 0.14391666666666664 - type: precision_at_3 value: 14.932499999999996 - type: precision_at_5 value: 10.50825 - type: recall_at_1 value: 24.604166666666664 - type: recall_at_10 value: 46.9525 - 
type: recall_at_100 value: 68.67816666666667 - type: recall_at_1000 value: 86.59783333333334 - type: recall_at_3 value: 35.49783333333333 - type: recall_at_5 value: 40.52525000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.559 - type: map_at_10 value: 29.023 - type: map_at_100 value: 29.818 - type: map_at_1000 value: 29.909000000000002 - type: map_at_3 value: 27.037 - type: map_at_5 value: 28.225 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 31.962000000000003 - type: mrr_at_100 value: 32.726 - type: mrr_at_1000 value: 32.800000000000004 - type: mrr_at_3 value: 30.266 - type: mrr_at_5 value: 31.208999999999996 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 32.53 - type: ndcg_at_100 value: 36.758 - type: ndcg_at_1000 value: 39.362 - type: ndcg_at_3 value: 28.985 - type: ndcg_at_5 value: 30.757 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.759 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.219 - type: precision_at_5 value: 8.527999999999999 - type: recall_at_1 value: 23.559 - type: recall_at_10 value: 40.585 - type: recall_at_100 value: 60.306000000000004 - type: recall_at_1000 value: 80.11 - type: recall_at_3 value: 30.794 - type: recall_at_5 value: 35.186 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.384999999999998 - type: map_at_10 value: 22.142 - type: map_at_100 value: 23.057 - type: map_at_1000 value: 23.177 - type: map_at_3 value: 20.29 - type: map_at_5 value: 21.332 - type: mrr_at_1 value: 19.89 - type: mrr_at_10 value: 25.771 - type: mrr_at_100 value: 26.599 - type: mrr_at_1000 value: 26.680999999999997 - type: mrr_at_3 value: 23.962 - type: mrr_at_5 value: 24.934 - type: ndcg_at_1 value: 19.89 - type: ndcg_at_10 value: 25.97 - type: ndcg_at_100 value: 30.605 - type: ndcg_at_1000 value: 33.619 - type: ndcg_at_3 value: 22.704 - type: ndcg_at_5 value: 24.199 - type: precision_at_1 value: 19.89 - type: precision_at_10 value: 4.553 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 10.541 - type: precision_at_5 value: 7.46 - type: recall_at_1 value: 16.384999999999998 - type: recall_at_10 value: 34.001 - type: recall_at_100 value: 55.17100000000001 - type: recall_at_1000 value: 77.125 - type: recall_at_3 value: 24.618000000000002 - type: recall_at_5 value: 28.695999999999998 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.726 - type: map_at_10 value: 31.227 - type: map_at_100 value: 32.311 - type: map_at_1000 value: 32.419 - type: map_at_3 value: 28.765 - type: map_at_5 value: 30.229 - type: mrr_at_1 value: 27.705000000000002 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 35.931000000000004 - type: mrr_at_1000 value: 36 - type: mrr_at_3 value: 32.603 - type: mrr_at_5 value: 34.117999999999995 - type: ndcg_at_1 value: 27.705000000000002 - type: ndcg_at_10 value: 35.968 - type: ndcg_at_100 value: 41.197 - type: ndcg_at_1000 value: 43.76 - type: ndcg_at_3 value: 31.304 - type: ndcg_at_5 value: 33.661 - type: precision_at_1 value: 27.705000000000002 - type: precision_at_10 value: 5.942 - type: precision_at_100 value: 
0.964 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 13.868 - type: precision_at_5 value: 9.944 - type: recall_at_1 value: 23.726 - type: recall_at_10 value: 46.786 - type: recall_at_100 value: 70.072 - type: recall_at_1000 value: 88.2 - type: recall_at_3 value: 33.981 - type: recall_at_5 value: 39.893 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.344 - type: map_at_10 value: 31.636999999999997 - type: map_at_100 value: 33.065 - type: map_at_1000 value: 33.300000000000004 - type: map_at_3 value: 29.351 - type: map_at_5 value: 30.432 - type: mrr_at_1 value: 27.866000000000003 - type: mrr_at_10 value: 35.587 - type: mrr_at_100 value: 36.52 - type: mrr_at_1000 value: 36.597 - type: mrr_at_3 value: 33.696 - type: mrr_at_5 value: 34.713 - type: ndcg_at_1 value: 27.866000000000003 - type: ndcg_at_10 value: 36.61 - type: ndcg_at_100 value: 41.88 - type: ndcg_at_1000 value: 45.105000000000004 - type: ndcg_at_3 value: 33.038000000000004 - type: ndcg_at_5 value: 34.331 - type: precision_at_1 value: 27.866000000000003 - type: precision_at_10 value: 6.917 - type: precision_at_100 value: 1.3599999999999999 - type: precision_at_1000 value: 0.233 - type: precision_at_3 value: 15.547 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.344 - type: recall_at_10 value: 45.782000000000004 - type: recall_at_100 value: 69.503 - type: recall_at_1000 value: 90.742 - type: recall_at_3 value: 35.160000000000004 - type: recall_at_5 value: 39.058 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.776 - type: map_at_10 value: 27.285999999999998 - type: map_at_100 value: 28.235 - type: map_at_1000 value: 28.337 - type: map_at_3 value: 25.147000000000002 - type: map_at_5 value: 26.401999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 29.409999999999997 - type: mrr_at_100 value: 30.275000000000002 - type: mrr_at_1000 value: 30.354999999999997 - type: mrr_at_3 value: 27.418 - type: mrr_at_5 value: 28.592000000000002 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 31.239 - type: ndcg_at_100 value: 35.965 - type: ndcg_at_1000 value: 38.602 - type: ndcg_at_3 value: 27.174 - type: ndcg_at_5 value: 29.229 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 4.806 - type: precision_at_100 value: 0.776 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 8.022 - type: recall_at_1 value: 20.776 - type: recall_at_10 value: 41.294 - type: recall_at_100 value: 63.111 - type: recall_at_1000 value: 82.88600000000001 - type: recall_at_3 value: 30.403000000000002 - type: recall_at_5 value: 35.455999999999996 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 9.376 - type: map_at_10 value: 15.926000000000002 - type: map_at_100 value: 17.585 - type: map_at_1000 value: 17.776 - type: map_at_3 value: 13.014000000000001 - type: map_at_5 value: 14.417 - type: mrr_at_1 value: 20.195 - type: mrr_at_10 value: 29.95 - type: mrr_at_100 value: 31.052000000000003 - type: mrr_at_1000 value: 31.108000000000004 - type: mrr_at_3 value: 26.667 - type: mrr_at_5 value: 28.458 - type: ndcg_at_1 value: 20.195 - type: ndcg_at_10 value: 22.871 - type: ndcg_at_100 
value: 29.921999999999997 - type: ndcg_at_1000 value: 33.672999999999995 - type: ndcg_at_3 value: 17.782999999999998 - type: ndcg_at_5 value: 19.544 - type: precision_at_1 value: 20.195 - type: precision_at_10 value: 7.394 - type: precision_at_100 value: 1.493 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 13.073 - type: precision_at_5 value: 10.436 - type: recall_at_1 value: 9.376 - type: recall_at_10 value: 28.544999999999998 - type: recall_at_100 value: 53.147999999999996 - type: recall_at_1000 value: 74.62 - type: recall_at_3 value: 16.464000000000002 - type: recall_at_5 value: 21.004 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.415000000000001 - type: map_at_10 value: 18.738 - type: map_at_100 value: 27.291999999999998 - type: map_at_1000 value: 28.992 - type: map_at_3 value: 13.196 - type: map_at_5 value: 15.539 - type: mrr_at_1 value: 66.5 - type: mrr_at_10 value: 74.518 - type: mrr_at_100 value: 74.86 - type: mrr_at_1000 value: 74.87 - type: mrr_at_3 value: 72.375 - type: mrr_at_5 value: 73.86200000000001 - type: ndcg_at_1 value: 54.37499999999999 - type: ndcg_at_10 value: 41.317 - type: ndcg_at_100 value: 45.845 - type: ndcg_at_1000 value: 52.92 - type: ndcg_at_3 value: 44.983000000000004 - type: ndcg_at_5 value: 42.989 - type: precision_at_1 value: 66.5 - type: precision_at_10 value: 33.6 - type: precision_at_100 value: 10.972999999999999 - type: precision_at_1000 value: 2.214 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.15 - type: recall_at_1 value: 8.415000000000001 - type: recall_at_10 value: 24.953 - type: recall_at_100 value: 52.48199999999999 - type: recall_at_1000 value: 75.093 - type: recall_at_3 value: 14.341000000000001 - type: recall_at_5 value: 18.468 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.06499999999999 - type: f1 value: 41.439327599975385 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 66.02 - type: map_at_10 value: 76.68599999999999 - type: map_at_100 value: 76.959 - type: map_at_1000 value: 76.972 - type: map_at_3 value: 75.024 - type: map_at_5 value: 76.153 - type: mrr_at_1 value: 71.197 - type: mrr_at_10 value: 81.105 - type: mrr_at_100 value: 81.232 - type: mrr_at_1000 value: 81.233 - type: mrr_at_3 value: 79.758 - type: mrr_at_5 value: 80.69 - type: ndcg_at_1 value: 71.197 - type: ndcg_at_10 value: 81.644 - type: ndcg_at_100 value: 82.645 - type: ndcg_at_1000 value: 82.879 - type: ndcg_at_3 value: 78.792 - type: ndcg_at_5 value: 80.528 - type: precision_at_1 value: 71.197 - type: precision_at_10 value: 10.206999999999999 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 30.868000000000002 - type: precision_at_5 value: 19.559 - type: recall_at_1 value: 66.02 - type: recall_at_10 value: 92.50699999999999 - type: recall_at_100 value: 96.497 - type: recall_at_1000 value: 97.956 - type: recall_at_3 value: 84.866 - type: recall_at_5 value: 89.16199999999999 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 17.948 - type: map_at_10 value: 29.833 - type: map_at_100 value: 31.487 - type: map_at_1000 value: 
31.674000000000003 - type: map_at_3 value: 26.029999999999998 - type: map_at_5 value: 28.038999999999998 - type: mrr_at_1 value: 34.721999999999994 - type: mrr_at_10 value: 44.214999999999996 - type: mrr_at_100 value: 44.994 - type: mrr_at_1000 value: 45.051 - type: mrr_at_3 value: 41.667 - type: mrr_at_5 value: 43.032 - type: ndcg_at_1 value: 34.721999999999994 - type: ndcg_at_10 value: 37.434 - type: ndcg_at_100 value: 43.702000000000005 - type: ndcg_at_1000 value: 46.993 - type: ndcg_at_3 value: 33.56 - type: ndcg_at_5 value: 34.687 - type: precision_at_1 value: 34.721999999999994 - type: precision_at_10 value: 10.401 - type: precision_at_100 value: 1.7049999999999998 - type: precision_at_1000 value: 0.22799999999999998 - type: precision_at_3 value: 22.531000000000002 - type: precision_at_5 value: 16.42 - type: recall_at_1 value: 17.948 - type: recall_at_10 value: 45.062999999999995 - type: recall_at_100 value: 68.191 - type: recall_at_1000 value: 87.954 - type: recall_at_3 value: 31.112000000000002 - type: recall_at_5 value: 36.823 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 36.644 - type: map_at_10 value: 57.658 - type: map_at_100 value: 58.562000000000005 - type: map_at_1000 value: 58.62500000000001 - type: map_at_3 value: 54.022999999999996 - type: map_at_5 value: 56.293000000000006 - type: mrr_at_1 value: 73.288 - type: mrr_at_10 value: 80.51700000000001 - type: mrr_at_100 value: 80.72 - type: mrr_at_1000 value: 80.728 - type: mrr_at_3 value: 79.33200000000001 - type: mrr_at_5 value: 80.085 - type: ndcg_at_1 value: 73.288 - type: ndcg_at_10 value: 66.61 - type: ndcg_at_100 value: 69.723 - type: ndcg_at_1000 value: 70.96000000000001 - type: ndcg_at_3 value: 61.358999999999995 - type: ndcg_at_5 value: 64.277 - type: precision_at_1 value: 73.288 - type: precision_at_10 value: 14.17 - type: precision_at_100 value: 1.659 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 39.487 - type: precision_at_5 value: 25.999 - type: recall_at_1 value: 36.644 - type: recall_at_10 value: 70.851 - type: recall_at_100 value: 82.94399999999999 - type: recall_at_1000 value: 91.134 - type: recall_at_3 value: 59.230000000000004 - type: recall_at_5 value: 64.997 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.00280000000001 - type: ap value: 80.46302061021223 - type: f1 value: 85.9592921596419 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 22.541 - type: map_at_10 value: 34.625 - type: map_at_100 value: 35.785 - type: map_at_1000 value: 35.831 - type: map_at_3 value: 30.823 - type: map_at_5 value: 32.967999999999996 - type: mrr_at_1 value: 23.180999999999997 - type: mrr_at_10 value: 35.207 - type: mrr_at_100 value: 36.315 - type: mrr_at_1000 value: 36.355 - type: mrr_at_3 value: 31.483 - type: mrr_at_5 value: 33.589999999999996 - type: ndcg_at_1 value: 23.195 - type: ndcg_at_10 value: 41.461 - type: ndcg_at_100 value: 47.032000000000004 - type: ndcg_at_1000 value: 48.199999999999996 - type: ndcg_at_3 value: 33.702 - type: ndcg_at_5 value: 37.522 - type: precision_at_1 value: 23.195 - type: precision_at_10 value: 6.526999999999999 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 
14.308000000000002 - type: precision_at_5 value: 10.507 - type: recall_at_1 value: 22.541 - type: recall_at_10 value: 62.524 - type: recall_at_100 value: 88.228 - type: recall_at_1000 value: 97.243 - type: recall_at_3 value: 41.38 - type: recall_at_5 value: 50.55 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.69949840401279 - type: f1 value: 92.54141471311786 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.56041951664386 - type: f1 value: 55.88499977508287 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.62071284465365 - type: f1 value: 69.36717546572152 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.35843981170142 - type: f1 value: 76.15496453538884 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.33664956793118 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.883839621715524 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.096874986740758 - type: mrr value: 30.97300481932132 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.4 - type: map_at_10 value: 11.852 - type: map_at_100 value: 14.758 - type: map_at_1000 value: 16.134 - type: map_at_3 value: 8.558 - type: map_at_5 value: 10.087 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 52.05800000000001 - type: mrr_at_100 value: 52.689 - type: mrr_at_1000 value: 52.742999999999995 - type: mrr_at_3 value: 50.205999999999996 - type: mrr_at_5 value: 51.367 - type: ndcg_at_1 value: 42.57 - type: ndcg_at_10 value: 32.449 - type: ndcg_at_100 value: 29.596 - type: ndcg_at_1000 value: 38.351 - type: ndcg_at_3 value: 37.044 - type: ndcg_at_5 value: 35.275 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 23.87 - type: precision_at_100 value: 7.625 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 34.365 - type: precision_at_5 value: 30.341 - type: recall_at_1 value: 5.4 - type: recall_at_10 value: 15.943999999999999 - type: recall_at_100 value: 29.805 - type: recall_at_1000 value: 61.695 - type: recall_at_3 value: 9.539 - type: recall_at_5 value: 12.127 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 36.047000000000004 - type: map_at_10 value: 51.6 - type: map_at_100 value: 52.449999999999996 - type: map_at_1000 value: 52.476 - type: map_at_3 value: 47.452 - type: map_at_5 value: 49.964 
- type: mrr_at_1 value: 40.382 - type: mrr_at_10 value: 54.273 - type: mrr_at_100 value: 54.859 - type: mrr_at_1000 value: 54.876000000000005 - type: mrr_at_3 value: 51.014 - type: mrr_at_5 value: 52.983999999999995 - type: ndcg_at_1 value: 40.353 - type: ndcg_at_10 value: 59.11300000000001 - type: ndcg_at_100 value: 62.604000000000006 - type: ndcg_at_1000 value: 63.187000000000005 - type: ndcg_at_3 value: 51.513 - type: ndcg_at_5 value: 55.576 - type: precision_at_1 value: 40.353 - type: precision_at_10 value: 9.418 - type: precision_at_100 value: 1.1440000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.078000000000003 - type: precision_at_5 value: 16.250999999999998 - type: recall_at_1 value: 36.047000000000004 - type: recall_at_10 value: 79.22200000000001 - type: recall_at_100 value: 94.23 - type: recall_at_1000 value: 98.51100000000001 - type: recall_at_3 value: 59.678 - type: recall_at_5 value: 68.967 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 68.232 - type: map_at_10 value: 81.674 - type: map_at_100 value: 82.338 - type: map_at_1000 value: 82.36099999999999 - type: map_at_3 value: 78.833 - type: map_at_5 value: 80.58 - type: mrr_at_1 value: 78.64 - type: mrr_at_10 value: 85.164 - type: mrr_at_100 value: 85.317 - type: mrr_at_1000 value: 85.319 - type: mrr_at_3 value: 84.127 - type: mrr_at_5 value: 84.789 - type: ndcg_at_1 value: 78.63 - type: ndcg_at_10 value: 85.711 - type: ndcg_at_100 value: 87.238 - type: ndcg_at_1000 value: 87.444 - type: ndcg_at_3 value: 82.788 - type: ndcg_at_5 value: 84.313 - type: precision_at_1 value: 78.63 - type: precision_at_10 value: 12.977 - type: precision_at_100 value: 1.503 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.113 - type: precision_at_5 value: 23.71 - type: recall_at_1 value: 68.232 - type: recall_at_10 value: 93.30199999999999 - type: recall_at_100 value: 98.799 - type: recall_at_1000 value: 99.885 - type: recall_at_3 value: 84.827 - type: recall_at_5 value: 89.188 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 45.71879170816294 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 59.65866311751794 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.218 - type: map_at_10 value: 10.337 - type: map_at_100 value: 12.131 - type: map_at_1000 value: 12.411 - type: map_at_3 value: 7.4270000000000005 - type: map_at_5 value: 8.913 - type: mrr_at_1 value: 20.8 - type: mrr_at_10 value: 30.868000000000002 - type: mrr_at_100 value: 31.903 - type: mrr_at_1000 value: 31.972 - type: mrr_at_3 value: 27.367 - type: mrr_at_5 value: 29.372 - type: ndcg_at_1 value: 20.8 - type: ndcg_at_10 value: 17.765 - type: ndcg_at_100 value: 24.914 - type: ndcg_at_1000 value: 30.206 - type: ndcg_at_3 value: 16.64 - type: ndcg_at_5 value: 14.712 - type: precision_at_1 value: 20.8 - type: precision_at_10 value: 9.24 - type: precision_at_100 value: 1.9560000000000002 - type: precision_at_1000 value: 0.32299999999999995 - type: precision_at_3 value: 15.467 - type: precision_at_5 value: 12.94 - type: recall_at_1 value: 
4.218 - type: recall_at_10 value: 18.752 - type: recall_at_100 value: 39.7 - type: recall_at_1000 value: 65.57300000000001 - type: recall_at_3 value: 9.428 - type: recall_at_5 value: 13.133000000000001 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.04338850207233 - type: cos_sim_spearman value: 78.5054651430423 - type: euclidean_pearson value: 80.30739451228612 - type: euclidean_spearman value: 78.48377464299097 - type: manhattan_pearson value: 80.40795049052781 - type: manhattan_spearman value: 78.49506205443114 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.11596224442962 - type: cos_sim_spearman value: 76.20997388935461 - type: euclidean_pearson value: 80.56858451349109 - type: euclidean_spearman value: 75.92659183871186 - type: manhattan_pearson value: 80.60246102203844 - type: manhattan_spearman value: 76.03018971432664 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.34691640755737 - type: cos_sim_spearman value: 82.4018369631579 - type: euclidean_pearson value: 81.87673092245366 - type: euclidean_spearman value: 82.3671489960678 - type: manhattan_pearson value: 81.88222387719948 - type: manhattan_spearman value: 82.3816590344736 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.2836092579524 - type: cos_sim_spearman value: 78.99982781772064 - type: euclidean_pearson value: 80.5184271010527 - type: euclidean_spearman value: 78.89777392101904 - type: manhattan_pearson value: 80.53585705018664 - type: manhattan_spearman value: 78.92898405472994 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.7349907750784 - type: cos_sim_spearman value: 87.7611234446225 - type: euclidean_pearson value: 86.98759326731624 - type: euclidean_spearman value: 87.58321319424618 - type: manhattan_pearson value: 87.03483090370842 - type: manhattan_spearman value: 87.63278333060288 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.75873694924825 - type: cos_sim_spearman value: 83.80237999094724 - type: euclidean_pearson value: 83.55023725861537 - type: euclidean_spearman value: 84.12744338577744 - type: manhattan_pearson value: 83.58816983036232 - type: manhattan_spearman value: 84.18520748676501 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.21630882940174 - type: cos_sim_spearman value: 87.72382883437031 - type: euclidean_pearson value: 88.69933350930333 - type: euclidean_spearman value: 88.24660814383081 - type: manhattan_pearson value: 88.77331018833499 - type: manhattan_spearman value: 88.26109989380632 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 
6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 61.11854063060489 - type: cos_sim_spearman value: 63.14678634195072 - type: euclidean_pearson value: 61.679090067000864 - type: euclidean_spearman value: 62.28876589509653 - type: manhattan_pearson value: 62.082324165511004 - type: manhattan_spearman value: 62.56030932816679 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.00319882832645 - type: cos_sim_spearman value: 85.94529772647257 - type: euclidean_pearson value: 85.6661390122756 - type: euclidean_spearman value: 85.97747815545827 - type: manhattan_pearson value: 85.58422770541893 - type: manhattan_spearman value: 85.9237139181532 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.16198731863916 - type: mrr value: 94.25202702163487 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 54.761 - type: map_at_10 value: 64.396 - type: map_at_100 value: 65.07 - type: map_at_1000 value: 65.09899999999999 - type: map_at_3 value: 61.846000000000004 - type: map_at_5 value: 63.284 - type: mrr_at_1 value: 57.667 - type: mrr_at_10 value: 65.83099999999999 - type: mrr_at_100 value: 66.36800000000001 - type: mrr_at_1000 value: 66.39399999999999 - type: mrr_at_3 value: 64.056 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 57.667 - type: ndcg_at_10 value: 68.854 - type: ndcg_at_100 value: 71.59100000000001 - type: ndcg_at_1000 value: 72.383 - type: ndcg_at_3 value: 64.671 - type: ndcg_at_5 value: 66.796 - type: precision_at_1 value: 57.667 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.444 - type: precision_at_5 value: 16.667 - type: recall_at_1 value: 54.761 - type: recall_at_10 value: 80.9 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 69.672 - type: recall_at_5 value: 75.083 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8079207920792 - type: cos_sim_ap value: 94.88470927617445 - type: cos_sim_f1 value: 90.08179959100204 - type: cos_sim_precision value: 92.15481171548117 - type: cos_sim_recall value: 88.1 - type: dot_accuracy value: 99.58613861386138 - type: dot_ap value: 82.94822578881316 - type: dot_f1 value: 77.33333333333333 - type: dot_precision value: 79.36842105263158 - type: dot_recall value: 75.4 - type: euclidean_accuracy value: 99.8069306930693 - type: euclidean_ap value: 94.81367858031837 - type: euclidean_f1 value: 90.01009081735621 - type: euclidean_precision value: 90.83503054989816 - type: euclidean_recall value: 89.2 - type: manhattan_accuracy value: 99.81188118811882 - type: manhattan_ap value: 94.91405337220161 - type: manhattan_f1 value: 90.2763561924258 - type: manhattan_precision value: 92.45283018867924 - type: manhattan_recall value: 88.2 - type: max_accuracy value: 99.81188118811882 - type: max_ap value: 94.91405337220161 - type: max_f1 value: 90.2763561924258 - task: type: Clustering dataset: 
type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 58.511599500053094 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.984728147814707 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.93428193939015 - type: mrr value: 50.916557911043206 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.562500894537145 - type: cos_sim_spearman value: 31.162587976726307 - type: dot_pearson value: 22.633662187735762 - type: dot_spearman value: 22.723000282378962 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.219 - type: map_at_10 value: 1.871 - type: map_at_100 value: 10.487 - type: map_at_1000 value: 25.122 - type: map_at_3 value: 0.657 - type: map_at_5 value: 1.0699999999999998 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 89.567 - type: mrr_at_100 value: 89.748 - type: mrr_at_1000 value: 89.748 - type: mrr_at_3 value: 88.667 - type: mrr_at_5 value: 89.567 - type: ndcg_at_1 value: 80 - type: ndcg_at_10 value: 74.533 - type: ndcg_at_100 value: 55.839000000000006 - type: ndcg_at_1000 value: 49.748 - type: ndcg_at_3 value: 79.53099999999999 - type: ndcg_at_5 value: 78.245 - type: precision_at_1 value: 84 - type: precision_at_10 value: 78.4 - type: precision_at_100 value: 56.99999999999999 - type: precision_at_1000 value: 21.98 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 84.8 - type: recall_at_1 value: 0.219 - type: recall_at_10 value: 2.02 - type: recall_at_100 value: 13.555 - type: recall_at_1000 value: 46.739999999999995 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.13 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.5029999999999997 - type: map_at_10 value: 11.042 - type: map_at_100 value: 16.326999999999998 - type: map_at_1000 value: 17.836 - type: map_at_3 value: 6.174 - type: map_at_5 value: 7.979 - type: mrr_at_1 value: 42.857 - type: mrr_at_10 value: 52.617000000000004 - type: mrr_at_100 value: 53.351000000000006 - type: mrr_at_1000 value: 53.351000000000006 - type: mrr_at_3 value: 46.939 - type: mrr_at_5 value: 50.714000000000006 - type: ndcg_at_1 value: 38.775999999999996 - type: ndcg_at_10 value: 27.125 - type: ndcg_at_100 value: 35.845 - type: ndcg_at_1000 value: 47.377 - type: ndcg_at_3 value: 29.633 - type: ndcg_at_5 value: 28.378999999999998 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 24.082 - type: precision_at_100 value: 6.877999999999999 - type: precision_at_1000 value: 1.463 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 28.571 - type: recall_at_1 value: 3.5029999999999997 - type: recall_at_10 value: 17.068 - type: recall_at_100 value: 43.361 - type: recall_at_1000 value: 78.835 - type: recall_at_3 value: 6.821000000000001 - type: recall_at_5 value: 10.357 - task: type: 
Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.0954 - type: ap value: 14.216844153511959 - type: f1 value: 54.63687418565117 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.46293152235427 - type: f1 value: 61.744177921638645 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 41.12708617788644 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.75430649102938 - type: cos_sim_ap value: 73.34252536948081 - type: cos_sim_f1 value: 67.53758935173774 - type: cos_sim_precision value: 63.3672525439408 - type: cos_sim_recall value: 72.29551451187335 - type: dot_accuracy value: 81.71305954580676 - type: dot_ap value: 59.5532209082386 - type: dot_f1 value: 56.18466898954705 - type: dot_precision value: 47.830923248053395 - type: dot_recall value: 68.07387862796834 - type: euclidean_accuracy value: 85.81987244441795 - type: euclidean_ap value: 73.34325409809446 - type: euclidean_f1 value: 67.83451360417443 - type: euclidean_precision value: 64.09955388588871 - type: euclidean_recall value: 72.0316622691293 - type: manhattan_accuracy value: 85.68277999642368 - type: manhattan_ap value: 73.1535450121903 - type: manhattan_f1 value: 67.928237896289 - type: manhattan_precision value: 63.56945722171113 - type: manhattan_recall value: 72.9287598944591 - type: max_accuracy value: 85.81987244441795 - type: max_ap value: 73.34325409809446 - type: max_f1 value: 67.928237896289 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.90441262079403 - type: cos_sim_ap value: 85.79331880741438 - type: cos_sim_f1 value: 78.31563529842548 - type: cos_sim_precision value: 74.6683424102779 - type: cos_sim_recall value: 82.33754234678165 - type: dot_accuracy value: 84.89928978926534 - type: dot_ap value: 75.25819218316 - type: dot_f1 value: 69.88730119720536 - type: dot_precision value: 64.23362374959665 - type: dot_recall value: 76.63227594702803 - type: euclidean_accuracy value: 89.01695967710637 - type: euclidean_ap value: 85.98986606038852 - type: euclidean_f1 value: 78.5277880014722 - type: euclidean_precision value: 75.22211253701876 - type: euclidean_recall value: 82.13735756082538 - type: manhattan_accuracy value: 88.99561454573679 - type: manhattan_ap value: 85.92262421793953 - type: manhattan_f1 value: 78.38866094740769 - type: manhattan_precision value: 76.02373028505282 - type: manhattan_recall value: 80.9054511857099 - type: max_accuracy value: 89.01695967710637 - type: max_ap value: 85.98986606038852 - type: max_f1 value: 78.5277880014722 language: - en license: mit duplicated_from: michaelfeil/ct2fast-e5-small-v2 --- # # Hugging Face Inference Endpoints -compatible version of michaelfeil/ct2fast-e5-small-v2 
Duplicate of michaelfeil/ct2fast-e5-small-v2, modified to run on Hugging Face Inference Endpoints. Requires a GPU Instance type to run. Creates symbolic links so that ctranslate2 reads the repository model without downloading from HF.

# Fast-Inference with Ctranslate2

Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [intfloat/e5-small-v2](https://huggingface.co/intfloat/e5-small-v2).

```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
```

```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-e5-small-v2"
model_name_orig = "intfloat/e5-small-v2"

from hf_hub_ctranslate2 import EncoderCT2fromHfHub

model = EncoderCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
)
outputs = model.generate(
    text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"]
)
# perform downstream tasks on outputs
outputs["pooler_output"]
outputs["last_hidden_state"]
outputs["attention_mask"]

# Alternative: use the SentenceTransformer Mix-In
# for end-to-end sentence embeddings generation
# (not pulling from this CT2fast-HF repo)
from hf_hub_ctranslate2 import CT2SentenceTransformer

model = CT2SentenceTransformer(
    model_name_orig, compute_type="int8_float16", device="cuda"
)
embeddings = model.encode(
    ["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
    batch_size=32,
    convert_to_numpy=True,
    normalize_embeddings=True,
)
print(embeddings.shape, embeddings)
scores = (embeddings @ embeddings.T) * 100
```

Checkpoint compatible with [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`

Converted on 2023-06-19 using

```
ct2-transformers-converter --model intfloat/e5-small-v2 --output_dir ~/tmp-ct2fast-e5-small-v2 --force --copy_files tokenizer.json modules.json README.md tokenizer_config.json sentence_bert_config.json vocab.txt special_tokens_map.json .gitattributes --trust_remote_code
```

# Licence and other remarks:

This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.

# Original description

# E5-small-v2

[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022

This model has 12 layers and the embedding size is 384.

## Usage

Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.

```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
               'query: summit define',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-v2')
model = AutoModel.from_pretrained('intfloat/e5-small-v2')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```

## Training Details

Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).

## Benchmark Evaluation

Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).

## Citation

If you find our paper or models helpful, please consider citing as follows:

```
@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}
```

## Limitations

This model only works for English texts. Long texts will be truncated to at most 512 tokens.

## Sentence Transformers

Below is an example for usage with sentence_transformers: `pip install sentence_transformers~=2.2.2`.
This is community contributed, and results may vary up to numerical precision.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-small-v2')
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
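The same "query: " / "passage: " prefix convention applies to the sentence_transformers path above. A minimal retrieval-style sketch under that assumption (the query and passage strings are illustrative, not taken from the card):

```python
from sentence_transformers import SentenceTransformer

# Prefix queries with "query: " and documents with "passage: ", as described above.
queries = ["query: how much protein should a female eat"]
passages = ["passage: The CDC's average protein requirement for women ages 19 to 70 is 46 grams per day."]

model = SentenceTransformer('intfloat/e5-small-v2')
q_emb = model.encode(queries, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)

# With normalized embeddings, the dot product equals cosine similarity.
scores = q_emb @ p_emb.T
print(scores)
```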
amal94/rl_course_vizdoom_health_gathering_supreme
amal94
2023-07-07T19:57:14Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T18:27:55Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 12.84 +/- 5.56
      name: mean_reward
      verified: false
---

A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r amal94/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
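If you prefer not to use the Sample-Factory CLI helper, the checkpoint files can also be fetched directly with `huggingface_hub`. A minimal sketch, assuming a standard `huggingface_hub` install; the local directory path is an arbitrary choice:

```python
from huggingface_hub import snapshot_download

# Download the experiment checkpoint from the Hub into train_dir, mirroring
# the layout that the load_from_hub helper above produces.
local_dir = snapshot_download(
    repo_id="amal94/rl_course_vizdoom_health_gathering_supreme",
    local_dir="./train_dir/rl_course_vizdoom_health_gathering_supreme",
)
print(local_dir)
```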
said10/my_test_q_a_demo_model
said10
2023-07-07T19:57:02Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T19:44:10Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: said10/my_test_q_a_demo_model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# said10/my_test_q_a_demo_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5327
- Validation Loss: 1.7084
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4141     | 2.0750          | 0     |
| 1.7894     | 1.7084          | 1     |
| 1.5327     | 1.7084          | 2     |

### Framework versions

- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
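A minimal inference sketch for this checkpoint, assuming TensorFlow is installed (the repository ships TF weights); the question and context strings below are only illustrative:

```python
from transformers import pipeline

# The repository provides TensorFlow weights, so request the TF backend explicitly.
qa = pipeline(
    "question-answering",
    model="said10/my_test_q_a_demo_model",
    framework="tf",
)

result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased for extractive question answering.",
)
print(result["answer"], result["score"])
```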