Dataset columns (schema summary):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-24 00:41:46 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (492 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-24 00:41:12 |
| card | string (length) | 11 | 1.01M |
joon09/kor-naver-ner-name-v2
joon09
2023-07-11T05:59:40Z
146
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-24T03:58:09Z
--- tags: - generated_from_trainer model-index: - name: kor-naver-ner-name-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kor-naver-ner-name-v2 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0+cu117 - Datasets 2.2.2 - Tokenizers 0.13.3
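The card above gives no usage code; a minimal inference sketch, assuming the checkpoint works with the standard `transformers` token-classification pipeline (the repo id and task come from this row's metadata, the Korean example sentence and aggregation setting are illustrative):

```python
from transformers import pipeline

# Load the fine-tuned BERT token-classification (NER) checkpoint from the Hub.
ner = pipeline(
    "token-classification",
    model="joon09/kor-naver-ner-name-v2",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)

# Entity label names depend on the (undocumented) training data.
print(ner("김철수는 네이버에서 일한다."))
```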
mrovejaxd/ABL_d
mrovejaxd
2023-07-11T05:58:29Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-11T05:04:55Z
--- tags: - generated_from_trainer model-index: - name: ABL_d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ABL_d This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cpu - Datasets 2.12.0 - Tokenizers 0.13.2
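For readers who want to reproduce the setup, the listed hyperparameters map onto `transformers.TrainingArguments` roughly as sketched below; the dataset and base-model wiring is omitted because the card does not document it, and the output directory name is illustrative:

```python
from transformers import TrainingArguments

# Values copied from the card. Adam with betas=(0.9, 0.999) and epsilon=1e-08
# and the linear schedule match the Trainer defaults, so no extra flags are needed.
training_args = TrainingArguments(
    output_dir="ABL_d",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```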
saillab/Nepali_Alpaca_ChatGPT_33B
saillab
2023-07-11T05:46:37Z
0
0
null
[ "region:us" ]
null
2023-07-09T05:08:20Z
# Nepali-Chat-GPT 33B 🇳🇵 This repo consist of the demo and the necessary guidelines for creating the open source Nepali Alpaca ChatGPT. As per our knowledge this is the first open-source large language model created specifically for **Nepali Language**. ## Model **🇳🇵Nepali Alpaca Chat-GPT-33B**: For this model we finetuned the timdettmers/guanaco-33b-merged with Nepali-Alpaca-GPT4 dataset. **🇳🇵33B Model Generated Output** ``` ### Instruction: आर्टिफिसियल इन्टेलिजेन्स भनेको के हो ### Output: आर्टिफिसियल इन्टेलिजेन्स, सामान्यतया एआई भनेर चिनिन्छ, कम्प्युटर विज्ञानको एउटा शाखा हो जसले मानिसले जस्तै सोच्न र काम गर्न सक्ने कम्प्युटर प्रणालीहरू विकास गर्ने प्रक्रियालाई बुझाउँछ। यसमा मेसिन लर्निङ, न्यूरल नेटवर्कहरू, र प्राकृतिक भाषा प्रशोधन जस्ता विभिन्न प्रविधिहरू समावेश हुन्छन् जसले कम्प्युटरहरूलाई स्पष्ट रूपमा प्रोग्राम नगरी सिक्न र समस्याहरू समाधान गर्न सक्षम बनाउन मद्दत गर्दछ। ### Instruction: वातावरण कसरी जोगाउने ? ### Output: त्यहाँ धेरै तरिकाहरू छन् जुन हामी वातावरण जोगाउन सक्छौं। यहाँ केही सुझावहरू छन् 1. घटाउनुहोस्, पुन: प्रयोग गर्नुहोस्, र पुन: प्रयोग गर्नुहोस् घटाउनुहोस्, पुन: प्रयोग गर्नुहोस्, र पुन: प्रयोग गर्नुहोस् भनेको फोहोर उत्पादन घटाउने, पुन: प्रयोग गर्ने, र पुन: प्रयोग गर्ने सबैभन्दा प्रभावकारी तरिकाहरू मध्ये एक हो। यसले घटाउने, पुन: प्रयोग गर्ने र पुन: प्रयोग गर्नेलाई प्रोत्साहन दिन्छ र वातावरणमा नकारात्मक प्रभावहरू कम गर्न मद्दत गर्दछ। २. ऊर्जा खपत घटाउनुहोस्: ऊर्जा खपत घटाउनु भनेको ऊर्जा-कुशल उपकरणहरू प्रयोग गर्नु, प्रयोगमा नभएको बेला बत्तीहरू र इलेक्ट्रोनिक्सहरू बन्द गर्नु, र प्रयोगमा नभएको इलेक्ट्रोनिक्स र उपकरणहरू पुन: प्रयोग गर्ने जस्ता कदमहरू चाल्नु हो। यसले ऊर्जा खपत घटाउने र वातावरणमा नकारात्मक प्रभावहरू घटाउने एक प्रभावकारी तरिका हो। 3. एकल-प्रयोग प्लास्टिक घटाउनुहोस्: एकल-प्रयोग प्लास्टिक घटाउनु भनेको एकल-प्रयोग प्लास्टिक सामग्रीहरू प्रयोग गर्नु अघि उनीहरूलाई पुन: प्रयोग गर्ने ``` ### How to contribute? We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66
ancolin/distilbert-base-uncased-finetuned-emotion
ancolin
2023-07-11T05:45:13Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T07:05:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.92 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2281 - Accuracy: 0.92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8512 | 1.0 | 250 | 0.3233 | 0.9045 | | 0.2569 | 2.0 | 500 | 0.2281 | 0.92 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
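A minimal usage sketch for this checkpoint, assuming it loads with the standard text-classification pipeline (the repo id comes from this row; the example sentence is illustrative):

```python
from transformers import pipeline

# DistilBERT fine-tuned on the `emotion` dataset
# (labels: sadness, joy, love, anger, fear, surprise).
classifier = pipeline(
    "text-classification",
    model="ancolin/distilbert-base-uncased-finetuned-emotion",
)

# Returned label strings may be LABEL_0..LABEL_5 if id2label was not configured.
print(classifier("I can't wait to see you this weekend!"))
```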
saillab/Nepali_Alpaca_ChatGPT_13B
saillab
2023-07-11T05:44:42Z
0
0
null
[ "region:us" ]
null
2023-07-06T21:36:45Z
# Nepali-Chat-GPT 13B 🇳🇵 This repo consist of the demo and the necessary guidelines for creating the open source Nepali Alpaca ChatGPT. As per our knowledge this is the first open-source large language model created specifically for **Nepali Language**. ## Model **🇳🇵Nepali Alpaca Chat-GPT-13B**: We finetuned the TheBloke/stable-vicuna-13B-GPTQ model with the Nepali-Alapaca-GPT4 dataset. **🇳🇵13B Model Generated Output** ``` ### Instruction: कृपया मेसिन लर्निङको व्याख्या गर्नुहोस्। ### Output: मेसिन लर्निङ भनेको डाटाबाट सिक्ने कम्प्युटर-आधारित अनुप्रयोगहरू जस्तै मेसिन लर्निङ संरचनाहरू द्वारा प्रशिक्षण गरिएको डेटा प्रयोग गरी सञ्चालन गर्ने विशेष प्रकारको कम्प्युटर प्रणाली हो। यस प्रकारको कम्प्युटर प्रणालीले स्थानान्तरण र प्रशिक्षण गर्न डेटा प्रयोग गर्छ, र यसको कार्यसम्पादनलाई अनुमानित रूपमा सुधार गर्न सक्छ। मेसिन लर्निङ प्रणालीहरूले उनीहरूको लिङ्क र अनुप्रयोगहरू सिक्ने र प्रशिक्षण गर्ने डेटामा सुधार गर्न सक्छन्, सामान्यतया व्यक्तिगत प्रविधिहरू जस्तै चिकनहरू पहिचान गर्ने र इन्स्ट्रुक्टरहरू व्यवहार गर्ने समावेश गर्दछ। मेसिन लर्निङ प्रणालीहरू विभिन्न उद्योगहरूमा फाइदा पुर्‍याउन सक्छन्, जस्तै स्वास्थ्य सेवा, उद्योग, र वित्त, र यसलाई धेरै प्रयोगकर्ता अनुभवहरू र स्रोतहरू प्रयोग गरेर विकास गरिएको छ। ### Instruction: प्राकृतिक स्रोतहरू र संरक्षण गर्न प्रविधिहरू प्रवर्द्धन गर्नुहोस्।` ### Output: यसमा ऊर्जा परिणाम बढाउने टेक्नोलोजी, सौर्य ईन्धन र फोहोर उपकरणहरू, र नवीकरणीय आविष्कार र प्रविधिहरू समावेश छन्। रोजगार र विकासमा अन्तर्राष्ट्रिय व्यवस्था र अन्तर्राष्ट्रिय घटना रणनीतिहरू लागू गर्नुहोस्। २. पारिस्थितिक र फसलहरूको लागि बाहिर निकाल्नुहोस्। जनसंख्या, उत्सर्जन, र कार्बन उत्सर्जन कम गर्ने सहयोग र अन्तर्राष्ट्रिय साझेदारीहरूको माध्यमबाट सम्बोधन गर्नुहोस्। 3. मानव संसाधनहरूमा दक्षता प्रवर्द्धन गर्नुहोस्। समुद्री र जलीय पारिस्थितिहरूको सुरक्षा, संकुचन, र स्रोतहरू रोक्न मानव व्यवहार र प्रविधिहरूको प्रयोग सुधार गर्नुहोस्। 4. नवीकरणीय प्रविधि र उपकरणहरूको अनुप्रयोग बढाउनुहोस्। सार्वजनिक यातायात, नवीकरणीय ऊर्जा स्रोतहरू, र नवीकरणीय इन्धन स्रोतहरू समावेश छन्।` ``` ### How to contribute? We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66
saillab/Nepali_Alpaca_ChatGPT_7B
saillab
2023-07-11T05:37:08Z
0
0
null
[ "region:us" ]
null
2023-07-10T04:29:09Z
# Nepali-Chat-GPT 7B 🇳🇵 This repo consist of the demo and the necessary guidelines for creating the open source Nepali Alpaca ChatGPT. As per our knowledge this is the first open-source large language model created specifically for **Nepali Language**. ## Model **🇳🇵Nepali Alpaca Chat-GPT-7B**: We have finetuned the timdettmers/guanaco-7b model with the Nepali-Alpaca-GPT4 dataset. HuggingFace Link: https://huggingface.co/saillab/Nepali_Alpaca_ChatGPT_7B **🇳🇵7B Model Generated Output** ``` ### Instruction:आर्टिफिशियल इन्टेलिजेन्सको व्याख्या गर्नुहोस्। ### Output: आर्टिफिसियल इन्टेलिजेन्स, प्रायः एआई भनेर चिनिन्छ, कम्प्युटर प्रणालीहरूको विकास हो जसले सामान्यतया मानव बुद्धिको आवश्यकता पर्ने कार्यहरू गर्न सक्छ। यसले कम्प्युटरहरूलाई स्पष्ट रूपमा प्रोग्राम नगरी सिक्न र सिक्न सक्ने क्षमता प्रदान गर्दछ, जसले तिनीहरूलाई स्वचालित रूपमा समस्याहरू समाधान गर्न र निर्णय गर्न सक्षम बनाउँछ। यसले कम्प्युटर प्रणालीहरूलाई मानिसले जस्तै सोच्न र निर्णय गर्न सक्षम बनाउँछ। यसले स्वास्थ्य सेवा उद्योगमा क्रान्तिकारी परिवर्तन गर्ने क्षमता प्रदान गर्दछ ### Instruction: विश्वका दुर्लभ भाषाहरूको संरक्षणको महत्त्व तीन वाक्यमा व्याख्या गर्नुहोस्। ### Output: विश्वका दुर्लभ भाषाहरूको संरक्षण धेरै कारणहरूको लागि महत्त्वपूर्ण छ। पहिलो, यसले संस्कृतिको विविधतालाई बढावा दिन्छ र विश्वभरका मानिसहरूलाई विभिन्न भाषाहरू सिक्न अनुमति दिन्छ। दोस्रो, संरक्षित भाषाहरूको संरक्षणले विगतका भाषाहरूलाई जोगाउँछ जुन अझै पनि प्रायः प्रयोग हुन्छन्। तेस्रो, संरक्षित भाषाहरूको संरक्षणले विभिन्न संस्कृतिका सांस्कृतिक सामग्रीहरूको संरक्षण र संरक्षण गर्दछ, जसले संस्कृतिको विविधतालाई बढावा दिन्छ। ``` ### How to contribute? We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66
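The Nepali Alpaca cards above show the prompt format but not the loading code; a hedged sketch, assuming the linked repo hosts weights loadable as a causal LM (if it only contains a LoRA/PEFT adapter, it would instead need to be applied on top of timdettmers/guanaco-7b):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "saillab/Nepali_Alpaca_ChatGPT_7B"  # assumption: full (merged) weights
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Same "### Instruction / ### Output" format as the samples shown in the card.
prompt = "### Instruction: आर्टिफिशियल इन्टेलिजेन्सको व्याख्या गर्नुहोस्।\n### Output:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```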
Sigwang/pegasus-samsum
Sigwang
2023-07-11T05:25:51Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-11T04:18:02Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6909 | 0.54 | 500 | 1.4848 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
mrovejaxd/ABL_c
mrovejaxd
2023-07-11T05:25:22Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-11T02:51:54Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: ABL_c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ABL_c This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7496 - Accuracy: 0.7192 - F1: 0.7184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
nomsgadded/pokemon-lora
nomsgadded
2023-07-11T05:25:03Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-11T03:46:05Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - nomsgadded/pokemon-lora These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
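A minimal loading sketch for these LoRA weights, assuming a recent `diffusers` release that supports attaching LoRA weights to the base pipeline (prompt, dtype, and device are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model named in the card, with the LoRA adaptation weights applied on top.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("nomsgadded/pokemon-lora")
pipe.to("cuda")

image = pipe("a cute green pokemon with big eyes").images[0]
image.save("pokemon.png")
```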
voyzan/v_arcobot_A01
voyzan
2023-07-11T05:00:34Z
0
0
stable-baselines3
[ "stable-baselines3", "Acrobot-v1", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-11T04:17:45Z
--- library_name: stable-baselines3 tags: - Acrobot-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Acrobot-v1 type: Acrobot-v1 metrics: - type: mean_reward value: -77.65 +/- 9.14 name: mean_reward verified: false --- # **PPO** Agent playing **Acrobot-v1** This is a trained model of a **PPO** agent playing **Acrobot-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
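The usage section of this card is still a TODO; a hedged sketch of the usual huggingface_sb3 loading pattern, assuming a Gymnasium-based SB3 install (the checkpoint filename inside the repo is a guess based on common upload conventions and may differ):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="voyzan/v_arcobot_A01",
    filename="ppo-Acrobot-v1.zip",
)
model = PPO.load(checkpoint)

# Quick rollout in the environment named in the card.
env = gym.make("Acrobot-v1")
obs, _ = env.reset()
for _ in range(500):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```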
yzzhong/ppo-LunarLander
yzzhong
2023-07-11T04:51:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-11T04:51:15Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.17 +/- 16.65 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
alsonlai/dqn-SpaceInvadersNoFrameskip-v4
alsonlai
2023-07-11T03:37:58Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-11T03:37:27Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 499.50 +/- 146.57 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alsonlai -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alsonlai -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alsonlai ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
NasimB/gpt2-dp-mod-datasets-txt-processing
NasimB
2023-07-11T03:26:10Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T13:17:49Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-dp-mod-datasets-txt-processing results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-dp-mod-datasets-txt-processing This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3134 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.694 | 0.28 | 500 | 5.6630 | | 5.3512 | 0.55 | 1000 | 5.2372 | | 5.0119 | 0.83 | 1500 | 4.9763 | | 4.767 | 1.1 | 2000 | 4.8279 | | 4.5688 | 1.38 | 2500 | 4.7089 | | 4.4767 | 1.65 | 3000 | 4.6105 | | 4.3893 | 1.93 | 3500 | 4.5220 | | 4.1792 | 2.21 | 4000 | 4.4846 | | 4.1211 | 2.48 | 4500 | 4.4302 | | 4.08 | 2.76 | 5000 | 4.3699 | | 4.0158 | 3.03 | 5500 | 4.3318 | | 3.7873 | 3.31 | 6000 | 4.3214 | | 3.7888 | 3.58 | 6500 | 4.2912 | | 3.7709 | 3.86 | 7000 | 4.2590 | | 3.6276 | 4.13 | 7500 | 4.2642 | | 3.4947 | 4.41 | 8000 | 4.2579 | | 3.4884 | 4.69 | 8500 | 4.2439 | | 3.4836 | 4.96 | 9000 | 4.2315 | | 3.3261 | 5.24 | 9500 | 4.2430 | | 3.2961 | 5.51 | 10000 | 4.2427 | | 3.2947 | 5.79 | 10500 | 4.2419 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
sharpbai/Baichuan-13B-Chat
sharpbai
2023-07-11T03:23:03Z
17
0
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "zh", "en", "arxiv:2104.09864", "arxiv:2108.12409", "arxiv:2009.03300", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-11T02:54:41Z
--- language: - zh - en pipeline_tag: text-generation inference: false --- # Baichuan-13B-Chat *The weight file is split into chunks with a size of 650MB for convenient and fast parallel downloads* A 650MB split weight version of [baichuan-inc/Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat) The original model card is down below ----------------------------------------- # Baichuan-13B-Chat <!-- Provide a quick summary of what the model is/does. --> ## 介绍 Baichuan-13B-Chat为Baichuan-13B系列模型中对齐后的版本,预训练模型可见[Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)。 [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) 是由百川智能继 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 之后开发的包含 130 亿参数的开源可商用的大规模语言模型,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有预训练 ([Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)) 和对齐 ([Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)) 两个版本。Baichuan-13B 有如下几个特点: 1. **更大尺寸、更多数据**:Baichuan-13B 在 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 的基础上进一步扩大参数量到 130 亿,并且在高质量的语料上训练了 1.4 万亿 tokens,超过 LLaMA-13B 40%,是当前开源 13B 尺寸下训练数据量最多的模型。支持中英双语,使用 ALiBi 位置编码,上下文窗口长度为 4096。 2. **同时开源预训练和对齐模型**:预训练模型是适用开发者的“基座”,而广大普通用户对有对话功能的对齐模型具有更强的需求。因此本次开源我们同时发布了对齐模型(Baichuan-13B-Chat),具有很强的对话能力,开箱即用,几行代码即可简单的部署。 3. **更高效的推理**:为了支持更广大用户的使用,我们本次同时开源了 int8 和 int4 的量化版本,相对非量化版本在几乎没有效果损失的情况下大大降低了部署的机器资源门槛,可以部署在如 Nvidia 3090 这样的消费级显卡上。 4. **开源免费可商用**:Baichuan-13B 不仅对学术研究完全开放,开发者也仅需邮件申请并获得官方商用许可后,即可以免费商用。 Baichuan-13B-Chat is the aligned version in the Baichuan-13B series of models, and the pre-trained model can be found at [Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base). [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) is an open-source, commercially usable large-scale language model developed by Baichuan Intelligence, following [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B). With 13 billion parameters, it achieves the best performance in standard Chinese and English benchmarks among models of its size. This release includes two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B has the following features: 1. **Larger size, more data**: Baichuan-13B further expands the parameter volume to 13 billion based on [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B), and has trained 1.4 trillion tokens on high-quality corpora, exceeding LLaMA-13B by 40%. It is currently the model with the most training data in the open-source 13B size. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4096. 2. **Open-source pre-training and alignment models simultaneously**: The pre-training model is a "base" suitable for developers, while the general public has a stronger demand for alignment models with dialogue capabilities. Therefore, in this open-source release, we also released the alignment model (Baichuan-13B-Chat), which has strong dialogue capabilities and is ready to use. It can be easily deployed with just a few lines of code. 3. **More efficient inference**: To support a wider range of users, we have open-sourced the INT8 and INT4 quantized versions. The model can be conveniently deployed on consumer GPUs like the Nvidia 3090 with almost no performance loss. 4. 
**Open-source, free, and commercially usable**: Baichuan-13B is not only fully open to academic research, but developers can also use it for free commercially after applying for and receiving official commercial permission via email. ## 使用方式 如下是一个使用Baichuan-13B-Chat进行对话的示例,正确输出为"乔戈里峰。世界第二高峰———乔戈里峰西方登山者称其为k2峰,海拔高度是8611米,位于喀喇昆仑山脉的中巴边境上" ```python import torch from transformers import AutoModel, AutoTokenizer from transformers.generation.utils import GenerationConfig tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True) model = AutoModel.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True) model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat") messages = [] messages.append({"role": "user", "content": "世界上第二高的山峰是哪座"}) response = model.chat(tokenizer, messages) print(response) ``` Here is an example of a conversation using Baichuan-13B-Chat, the correct output is "K2. The world's second highest peak - K2, also known as Mount Godwin-Austen or Chhogori, with an altitude of 8611 meters, is located on the China-Pakistan border in the Karakoram Range." ```python import torch from transformers import AutoModel, AutoTokenizer from transformers.generation.utils import GenerationConfig tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True) model = AutoModel.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True) model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat") messages = [] messages.append({"role": "user", "content": "Which moutain is the second highest one in the world?"}) response = model.chat(tokenizer, messages) print(response) ``` ## 模型详情 ### 模型描述 <!-- Provide a longer summary of what this model is. --> - **Developed by:** 百川智能(Baichuan Intelligent Technology) - **Email**: [email protected] - **Language(s) (NLP):** Chinese/English - **License:** 【Community License for Baichuan-13B Model】([ZH](Baichuan-13B%20%E6%A8%A1%E5%9E%8B%E5%95%86%E7%94%A8%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)| [EN](Community%20License%20for%20Baichuan-13B%20Model.pdf)) **商业用途(For commercial use):** 请通过上述Email联系申请书面授权。(Contact us via Email above to apply for written authorization.) ### 模型结构 <!-- Provide the basic links for the model. --> 整体模型基于Baichuan-7B,为了获得更好的推理性能,Baichuan-13B 使用了 ALiBi 线性偏置技术,相对于 Rotary Embedding 计算量更小,对推理性能有显著提升;与标准的 LLaMA-13B 相比,生成 2000 个 tokens 的平均推理速度 (tokens/s),实测提升 31.6%: | Model | tokens/s | |-------------|----------| | LLaMA-13B | 19.4 | | Baichuan-13B| 25.4 | 具体参数和见下表 | 模型名称 | 隐含层维度 | 层数 | 头数 |词表大小 | 总参数量 | 训练数据(tokens) | 位置编码 | 最大长度 | |-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------| | Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 | | Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 The overall model is based on Baichuan-7B. In order to achieve better inference performance, Baichuan-13B uses ALiBi linear bias technology, which has a smaller computational load compared to Rotary Embedding, and significantly improves inference performance. 
Compared with the standard LLaMA-13B, the average inference speed (tokens/s) for generating 2000 tokens has been tested to increase by 31.6%: | Model | tokens/s | |-------------|----------| | LLaMA-13B | 19.4 | | Baichuan-13B| 25.4 | The specific parameters are as follows: | Model Name | Hidden Size | Num Layers | Num Attention Heads |Vocab Size | Total Params | Training Dats(tokens) | Position Embedding | Max Length | |-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------| | Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 | | Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 ## 使用须知 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### 免责声明 我们在此声明,我们的开发团队并未基于 Baichuan-13B 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan-13B 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan-13B 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。 我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan-13B 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 We hereby declare that our development team has not developed any applications based on the Baichuan-13B model, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use the Baichuan-13B model for any activities that harm national social security or are illegal. In addition, we also ask users not to use the Baichuan-13B model for internet services that have not undergone appropriate security review and filing. We hope that all users will adhere to this principle to ensure that technological development takes place in a regulated and legal environment. We have done our utmost to ensure the compliance of the data used in the model training process. However, despite our great efforts, due to the complexity of the model and data, there may still be some unforeseen issues. Therefore, we will not take any responsibility for any issues arising from the use of the Baichuan-13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, misused, disseminated, or improperly exploited. ## 训练详情 训练具体设置参见[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B)。 For specific training settings, please refer to [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B). 
## 测评结果 ## [C-Eval](https://cevalbenchmark.com/index.html#home) | Model 5-shot | STEM | Social Sciences | Humanities | Others | Average | |-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:| | Baichuan-7B | 38.2 | 52.0 | 46.2 | 39.3 | 42.8 | | Chinese-Alpaca-Plus-13B | 35.2 | 45.6 | 40.0 | 38.2 | 38.8 | | Chinese-LLaMA-Plus-13B | 30.3 | 38.0 | 32.9 | 29.1 | 32.1 | | Ziya-LLaMA-13B-Pretrain | 27.6 | 34.4 | 32.0 | 28.6 | 30.0 | | LLaMA-13B | 27.0 | 33.6 | 27.7 | 27.6 | 28.5 | | moss-moon-003-base (16B)| 27.0 | 29.1 | 27.2 | 26.9 | 27.4 | | vicuna-13B | 22.8 | 24.8 | 22.3 | 18.5 | 22.2 | | **Baichuan-13B-Base** | **45.9** | **63.5** | **57.2** | **49.3** | **52.4** | | **Baichuan-13B-Chat** | **43.7** | **64.6** | **56.2** | **49.2** | **51.5** | ## [MMLU](https://arxiv.org/abs/2009.03300) | Model 5-shot | STEM | Social Sciences | Humanities | Others | Average | |-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:| | LLaMA-13B | 36.1 | 53.0 | 44.0 | 52.8 | 46.3 | | Chinese-Alpaca-Plus-13B | 36.9 | 48.9 | 40.5 | 50.5 | 43.9 | | Ziya-LLaMA-13B-Pretrain | 35.6 | 47.6 | 40.1 | 49.4 | 42.9 | | Baichuan-7B | 35.6 | 48.9 | 38.4 | 48.1 | 42.3 | | Chinese-LLaMA-Plus-13B | 33.1 | 42.8 | 37.0 | 44.6 | 39.2 | | vicuna-13B | 24.2 | 24.1 | 24.6 | 26.8 | 24.9 | | moss-moon-003-base (16B)| 22.4 | 22.8 | 24.2 | 24.4 | 23.6 | | **Baichuan-13B-Base** | **41.6** | **60.9** | **47.4** | **58.5** | **51.6** | | **Baichuan-13B-Chat** | **40.9** | **60.9** | **48.8** | **59.0** | **52.1** | > 说明:我们采用了 MMLU 官方的[评测方案](https://github.com/hendrycks/test)。 ## [CMMLU](https://github.com/haonan-li/CMMLU) | Model 5-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average | |-------------------------|:-----:|:----------:|:---------------:|:------:|:--------------:|:-------:| | Baichuan-7B | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 | | Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 | | Chinese-LLaMA-Plus-13B | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 | | Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 | | LLaMA-13B | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 | | moss-moon-003-base (16B)| 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 | | vicuna-13B | 24.0 | 25.4 | 25.3 | 25.0 | 25.0 | 24.9 | | **Baichuan-13B-Base** | **41.7** | **61.1** | **59.8** | **59.0** | **56.4** | **55.3** | | **Baichuan-13B-Chat** | **42.8** | **62.6** | **59.7** | **59.0** | **56.1** | **55.8** | > 说明:CMMLU 是一个综合性的中文评估基准,专门用于评估语言模型在中文语境下的知识和推理能力。我们采用了其官方的[评测方案](https://github.com/haonan-li/CMMLU)。 ## 微信群组 ![WeChat](https://github.com/baichuan-inc/Baichuan-13B/blob/main/media/wechat.jpeg?raw=true)
SpringYung/dolly_with_10latex_v2
SpringYung
2023-07-11T03:17:46Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-11T03:17:08Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
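A loading sketch that mirrors the quantization config listed above, assuming the repo contains a PEFT LoRA adapter; the card does not name the base model, so the base id below is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 config copied from the card's training-time settings.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model_id = "databricks/dolly-v2-7b"  # placeholder: base model is not stated in the card
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Attach the adapter weights from this repo on top of the quantized base.
model = PeftModel.from_pretrained(base, "SpringYung/dolly_with_10latex_v2")
```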
shikras/shikra-7b-delta-v1-0708
shikras
2023-07-11T03:07:55Z
58
3
transformers
[ "transformers", "pytorch", "shikra", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T15:57:55Z
--- license: cc-by-nc-4.0 --- # Shikra-7B-v1-0708 A frequently updated checkpoint for Shikra-7B-v1. Datasets: added the A-OKVQA dataset for Multiple Choice Question format training.
asenella/mmnist_MMVAEPlusconfig_adapted_resnets_seed_0_ratio_0_c
asenella
2023-07-11T03:00:48Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-07-11T03:00:34Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
sharpbai/Baichuan-13B-Base
sharpbai
2023-07-11T02:46:16Z
15
0
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "zh", "en", "arxiv:2104.09864", "arxiv:2108.12409", "arxiv:2009.03300", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-11T02:37:52Z
--- language: - zh - en pipeline_tag: text-generation inference: false --- # Baichuan-13B-Base *The weight file is split into chunks with a size of 650MB for convenient and fast parallel downloads* A 650MB split weight version of [baichuan-inc/Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base) The original model card is down below ----------------------------------------- # Baichuan-13B-Base <!-- Provide a quick summary of what the model is/does. --> ## 介绍 Baichuan-13B-Base为Baichuan-13B系列模型中的预训练版本,经过对齐后的模型可见[Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)。 [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) 是由百川智能继 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 之后开发的包含 130 亿参数的开源可商用的大规模语言模型,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有预训练 ([Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)) 和对齐 ([Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)) 两个版本。Baichuan-13B 有如下几个特点: 1. **更大尺寸、更多数据**:Baichuan-13B 在 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 的基础上进一步扩大参数量到 130 亿,并且在高质量的语料上训练了 1.4 万亿 tokens,超过 LLaMA-13B 40%,是当前开源 13B 尺寸下训练数据量最多的模型。支持中英双语,使用 ALiBi 位置编码,上下文窗口长度为 4096。 2. **同时开源预训练和对齐模型**:预训练模型是适用开发者的“基座”,而广大普通用户对有对话功能的对齐模型具有更强的需求。因此本次开源我们同时发布了对齐模型(Baichuan-13B-Chat),具有很强的对话能力,开箱即用,几行代码即可简单的部署。 3. **更高效的推理**:为了支持更广大用户的使用,我们本次同时开源了 int8 和 int4 的量化版本,相对非量化版本在几乎没有效果损失的情况下大大降低了部署的机器资源门槛,可以部署在如 Nvidia 3090 这样的消费级显卡上。 4. **开源免费可商用**:Baichuan-13B 不仅对学术研究完全开放,开发者也仅需邮件申请并获得官方商用许可后,即可以免费商用。 5. Baichuan-13B-Base is the pre-training version in the Baichuan-13B series of models, and the aligned model can be found at [Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat). [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) is an open-source, commercially usable large-scale language model developed by Baichuan Intelligence, following [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B). With 13 billion parameters, it achieves the best performance in standard Chinese and English benchmarks among models of its size. This release includes two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B has the following features: 1. **Larger size, more data**: Baichuan-13B further expands the parameter volume to 13 billion based on [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B), and has trained 1.4 trillion tokens on high-quality corpora, exceeding LLaMA-13B by 40%. It is currently the model with the most training data in the open-source 13B size. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4096. 2. **Open-source pre-training and alignment models simultaneously**: The pre-training model is a "base" suitable for developers, while the general public has a stronger demand for alignment models with dialogue capabilities. Therefore, in this open-source release, we also released the alignment model (Baichuan-13B-Chat), which has strong dialogue capabilities and is ready to use. It can be easily deployed with just a few lines of code. 3. **More efficient inference**: To support a wider range of users, we have open-sourced the INT8 and INT4 quantized versions. The model can be conveniently deployed on consumer GPUs like the Nvidia 3090 with almost no performance loss. 4. 
**Open-source, free, and commercially usable**: Baichuan-13B is not only fully open to academic research, but developers can also use it for free commercially after applying for and receiving official commercial permission via email. ## 模型详情 ### 模型描述 <!-- Provide a longer summary of what this model is. --> - **Developed by:** 百川智能(Baichuan Intelligent Technology) - **Email**: [email protected] - **Language(s) (NLP):** Chinese/English - **License:** 【Community License for Baichuan-13B Model】([ZH](Baichuan-13B%20%E6%A8%A1%E5%9E%8B%E5%95%86%E7%94%A8%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)| [EN](Community%20License%20for%20Baichuan-13B%20Model.pdf)) **商业用途还需遵循(For commercial use additional):** 请通过上述Email联系申请书面授权。(Contact us via Email above to apply for written authorization.) ### 模型结构 <!-- Provide the basic links for the model. --> 整体模型基于Baichuan-7B,为了获得更好的推理性能,Baichuan-13B 使用了 ALiBi 线性偏置技术,相对于 Rotary Embedding 计算量更小,对推理性能有显著提升;与标准的 LLaMA-13B 相比,生成 2000 个 tokens 的平均推理速度 (tokens/s),实测提升 31.6%: | Model | tokens/s | |-------------|----------| | LLaMA-13B | 19.4 | | Baichuan-13B| 25.4 | 具体参数和见下表 | 模型名称 | 隐含层维度 | 层数 | 头数 |词表大小 | 总参数量 | 训练数据(tokens) | 位置编码 | 最大长度 | |-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------| | Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 | | Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 The overall model is based on Baichuan-7B. In order to achieve better inference performance, Baichuan-13B uses ALiBi linear bias technology, which has a smaller computational load compared to Rotary Embedding, and significantly improves inference performance. Compared with the standard LLaMA-13B, the average inference speed (tokens/s) for generating 2000 tokens has been tested to increase by 31.6%: | Model | tokens/s | |-------------|----------| | LLaMA-13B | 19.4 | | Baichuan-13B| 25.4 | The specific parameters are as follows: | Model Name | Hidden Size | Num Layers | Num Attention Heads |Vocab Size | Total Params | Training Dats(tokens) | Position Embedding | Max Length | |-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------| | Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 | | Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 ### 免责声明 我们在此声明,我们的开发团队并未基于 Baichuan-13B 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan-13B 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan-13B 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。 我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan-13B 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 We hereby declare that our development team has not developed any applications based on the Baichuan-13B model, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use the Baichuan-13B model for any activities that harm national social security or are illegal. In addition, we also ask users not to use the Baichuan-13B model for internet services that have not undergone appropriate security review and filing. 
We hope that all users will adhere to this principle to ensure that technological development takes place in a regulated and legal environment. We have done our utmost to ensure the compliance of the data used in the model training process. However, despite our great efforts, due to the complexity of the model and data, there may still be some unforeseen issues. Therefore, we will not take any responsibility for any issues arising from the use of the Baichuan-13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, misused, disseminated, or improperly exploited. ## 训练详情 训练具体设置参见[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B)。 For specific training settings, please refer to [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B). ## 测评结果 ### [C-Eval](https://cevalbenchmark.com/index.html#home) | Model 5-shot | STEM | Social Sciences | Humanities | Others | Average | |-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:| | Baichuan-7B | 38.2 | 52.0 | 46.2 | 39.3 | 42.8 | | Chinese-Alpaca-Plus-13B | 35.2 | 45.6 | 40.0 | 38.2 | 38.8 | | Chinese-LLaMA-Plus-13B | 30.3 | 38.0 | 32.9 | 29.1 | 32.1 | | Ziya-LLaMA-13B-Pretrain | 27.6 | 34.4 | 32.0 | 28.6 | 30.0 | | LLaMA-13B | 27.0 | 33.6 | 27.7 | 27.6 | 28.5 | | moss-moon-003-base (16B)| 27.0 | 29.1 | 27.2 | 26.9 | 27.4 | | vicuna-13B | 22.8 | 24.8 | 22.3 | 18.5 | 22.2 | | **Baichuan-13B-Base** | **45.9** | **63.5** | **57.2** | **49.3** | **52.4** | | **Baichuan-13B-Chat** | **43.7** | **64.6** | **56.2** | **49.2** | **51.5** | ### [MMLU](https://arxiv.org/abs/2009.03300) | Model 5-shot | STEM | Social Sciences | Humanities | Others | Average | |-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:| | LLaMA-13B | 36.1 | 53.0 | 44.0 | 52.8 | 46.3 | | Chinese-Alpaca-Plus-13B | 36.9 | 48.9 | 40.5 | 50.5 | 43.9 | | Ziya-LLaMA-13B-Pretrain | 35.6 | 47.6 | 40.1 | 49.4 | 42.9 | | Baichuan-7B | 35.6 | 48.9 | 38.4 | 48.1 | 42.3 | | Chinese-LLaMA-Plus-13B | 33.1 | 42.8 | 37.0 | 44.6 | 39.2 | | vicuna-13B | 24.2 | 24.1 | 24.6 | 26.8 | 24.9 | | moss-moon-003-base (16B)| 22.4 | 22.8 | 24.2 | 24.4 | 23.6 | | **Baichuan-13B-Base** | **41.6** | **60.9** | **47.4** | **58.5** | **51.6** | | **Baichuan-13B-Chat** | **40.9** | **60.9** | **48.8** | **59.0** | **52.1** | > 说明:我们采用了 MMLU 官方的[评测方案](https://github.com/hendrycks/test)。 ### [CMMLU](https://github.com/haonan-li/CMMLU) | Model 5-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average | |-------------------------|:-----:|:----------:|:---------------:|:------:|:--------------:|:-------:| | Baichuan-7B | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 | | Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 | | Chinese-LLaMA-Plus-13B | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 | | Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 | | LLaMA-13B | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 | | moss-moon-003-base (16B)| 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 | | vicuna-13B | 24.0 | 25.4 | 25.3 | 25.0 | 25.0 | 24.9 | | **Baichuan-13B-Base** | **41.7** | **61.1** | **59.8** | **59.0** | **56.4** | **55.3** | | **Baichuan-13B-Chat** | **42.8** | **62.6** | **59.7** | **59.0** | **56.1** | **55.8** | > 说明:CMMLU 是一个综合性的中文评估基准,专门用于评估语言模型在中文语境下的知识和推理能力。我们采用了其官方的[评测方案](https://github.com/haonan-li/CMMLU)。 ## 微信群组 
![WeChat](https://github.com/baichuan-inc/Baichuan-13B/blob/main/media/wechat.jpeg?raw=true)
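The Base card above describes the architecture and benchmarks but, unlike the Chat card, shows no loading code; a minimal generation sketch, assuming the usual trust_remote_code causal-LM path (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sharpbai/Baichuan-13B-Base"  # 650MB-chunk mirror of baichuan-inc/Baichuan-13B-Base
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# Base (non-chat) model: plain continuation rather than instruction following.
# Prompt means "Artificial intelligence is".
inputs = tokenizer("人工智能是", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```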
RavenFangsk/chronoborous-33B-GPTQ
RavenFangsk
2023-07-11T02:28:20Z
5
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T03:26:46Z
An AutoGPTQ-quantized version of https://huggingface.co/Henk717/chronoboros-33B
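A hedged loading sketch, assuming the repo follows the usual AutoGPTQ layout and also ships the tokenizer files; group size, safetensors usage, and model basename are unknown, so the call sticks to defaults:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo = "RavenFangsk/chronoborous-33B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)

# Load the pre-quantized GPTQ checkpoint onto a single GPU.
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```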
pokorpohon/Fotoangel
pokorpohon
2023-07-11T02:26:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-11T02:18:29Z
--- license: creativeml-openrail-m ---
zwtharry/PPO-rocket
zwtharry
2023-07-11T02:09:34Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-11T02:09:13Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 234.64 +/- 40.86 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
manhtt-079/vipubmed-deberta-base
manhtt-079
2023-07-11T01:59:35Z
7
0
transformers
[ "transformers", "pytorch", "tf", "deberta-v2", "transformer", "vietnamese", "nlp", "bert", "deberta", "fill-mask", "vi", "dataset:VietAI/vi_pubmed", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
2023-05-06T10:35:36Z
--- language: - vi metrics: - f1 pipeline_tag: fill-mask license: mit datasets: - VietAI/vi_pubmed tags: - transformer - vietnamese - nlp - bert - deberta - deberta-v2 --- # ViPubMedDeBERTa: A Vietnamese pretrained biomedical language representation model ## Model description ## Model variations - `vipubmed-deberta-xsmall`: 22M backbone parameters - `vipubmed-deberta-base`: 86M backbone parameters ## How to use You can use this model directly with a pipeline for masked language modeling:<br> **_NOTE:_** The input text should be already word-segmented, you can use [Pyvi](https://github.com/trungtv/pyvi) (Python Vietnamese Core NLP Toolkit) to segment word before passing to the model. ```python >>> from transformers import pipeline >>> model = pipeline('fill-mask', model='manhtt-079/vipubmed-deberta-base') >>> text_with_mask = """Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ) . FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm . Phẫu_thuật được coi là phương_thức điều_trị tốt nhất , tiếp_theo là hóa_trị . Trong trường_hợp của chúng_tôi , [MASK] cắt bỏ không_thể thực_hiện được , do đó bệnh_nhân được hóa_trị hai dòng , sau đó là cấy_ghép tủy xương , sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên""" >>> model(text_with_mask) [{'score': 0.8480948805809021, 'token': 1621, 'token_str': 'phẫu_thuật', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, phẫu_thuật cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.1136574074625969, 'token': 83, 'token_str': 'việc', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, việc cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.014141257852315903, 'token': 589, 'token_str': 'phương_pháp', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, phương_pháp cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.0024715897161513567, 'token': 454, 'token_str': 'điều_trị', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. 
Trong trường_hợp của chúng_tôi, điều_trị cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.002370780799537897, 'token': 485, 'token_str': 'quá_trình', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, quá_trình cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}] ``` #### Get features: - With PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('manhtt-079/vipubmed-deberta-base') model = AutoModel.from_pretrained("manhtt-079/vipubmed-deberta-base") text = "Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS )." model_inputs = tokenizer(text, return_tensors='pt') outputs = model(**model_inputs) ``` - With TensorFlow ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('manhtt-079/vipubmed-deberta-base') model = TFAutoModel.from_pretrained("manhtt-079/vipubmed-deberta-base") text = "Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS )." model_inputs = tokenizer(text, return_tensors='tf') outputs = model(**model_inputs) ``` ## Pre-training data The ViPubMedDeBERTa model was pre-trained on [ViPubmed](https://github.com/vietai/ViPubmed), a dataset consisting of 20M Vietnamese Biomedical abstracts generated by large scale translation. ## Training procedure ### Data deduplication A fuzzy deduplication, targeting documents with high overlap, was conducted at the document level to enhance quality and address overfitting. Employing Locality Sensitive Hashing (LSH) with a threshold of 0.9 ensured the removal of documents with overlap exceeding 90%. This process resulted in an average reduction of the dataset's size by 3%. ### Pretraining We employ our model based on the [ViDeBERTa](https://github.com/HySonLab/ViDeBERTa) architecture and leverage its pre-trained checkpoint to continue pre-training. Our model was trained on a single A100 GPU (40GB) for 350 thousand steps, with a batch size of 16 and gradient accumulation steps set to 4 (resulting in a total of 64). The sequence length was limited to 512 tokens and the model peak learning rate of 1e-4. ## Evaluation results
g-rocha1904/distilbert-base-uncased-finetuned-cola
g-rocha1904
2023-07-11T01:47:10Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-11T00:47:03Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: g-rocha1904/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # g-rocha1904/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1889 - Validation Loss: 0.5522 - Train Matthews Correlation: 0.5354 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5249 | 0.4571 | 0.4933 | 0 | | 0.3197 | 0.4577 | 0.5301 | 1 | | 0.1889 | 0.5522 | 0.5354 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
simonestradasch/nercomlower-bert-base-spanish-wwm-cased
simonestradasch
2023-07-11T01:40:59Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "es", "dataset:simonestradasch/NERcomp2lower", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-11T01:30:38Z
--- language: - es tags: - generated_from_trainer datasets: - simonestradasch/NERcomp2lower model-index: - name: nercomlower-bert-base-spanish-wwm-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nercomlower-bert-base-spanish-wwm-cased This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the simonestradasch/NERcomp2lower dataset. It achieves the following results on the evaluation set: - Loss: 0.2448 - Body Part Precision: 0.7140 - Body Part Recall: 0.7676 - Body Part F1: 0.7398 - Body Part Number: 413 - Disease Precision: 0.7505 - Disease Recall: 0.7805 - Disease F1: 0.7652 - Disease Number: 975 - Family Member Precision: 0.875 - Family Member Recall: 0.9333 - Family Member F1: 0.9032 - Family Member Number: 30 - Medication Precision: 0.8764 - Medication Recall: 0.8387 - Medication F1: 0.8571 - Medication Number: 93 - Procedure Precision: 0.6571 - Procedure Recall: 0.6656 - Procedure F1: 0.6613 - Procedure Number: 311 - Overall Precision: 0.7344 - Overall Recall: 0.7634 - Overall F1: 0.7487 - Overall Accuracy: 0.9277 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.355 | 1.0 | 1004 | 0.2520 | 0.7073 | 0.8015 | 0.7514 | 413 | 0.7485 | 0.7477 | 0.7481 | 975 | 0.8710 | 0.9 | 0.8852 | 30 | 0.7196 | 0.8280 | 0.77 | 93 | 0.5804 | 0.6270 | 0.6028 | 311 | 0.7093 | 0.7459 | 0.7271 | 0.9219 | | 0.1869 | 2.0 | 2008 | 0.2448 | 0.7140 | 0.7676 | 0.7398 | 413 | 0.7505 | 0.7805 | 0.7652 | 975 | 0.875 | 0.9333 | 0.9032 | 30 | 0.8764 | 0.8387 | 0.8571 | 93 | 0.6571 | 0.6656 | 0.6613 | 311 | 0.7344 | 0.7634 | 0.7487 | 0.9277 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
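A hedged inference sketch for the checkpoint above; the clinical sentence is an invented example, and the entity groups returned should correspond to the card's categories (Body Part, Disease, Family Member, Medication, Procedure).

```python
# Illustrative only: token-classification inference with the fine-tuned NER model above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="simonestradasch/nercomlower-bert-base-spanish-wwm-cased",
    aggregation_strategy="simple",  # group sub-word pieces into whole entities
)
# Invented example sentence; not taken from the training data.
print(ner("paciente con dolor en el brazo izquierdo, tratado con paracetamol"))
```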
Jonathaniu/alpaca-bitcoin-tweets-sentiment-13b
Jonathaniu
2023-07-11T01:35:04Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-10T03:01:20Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
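A sketch, under stated assumptions, of rebuilding the 8-bit quantization config listed above and attaching this adapter with PEFT. The base 13B checkpoint is not named in the card, so the identifier below is a placeholder.

```python
# Sketch only: reconstruct the bitsandbytes config from the card and load the PEFT adapter.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # matches the card: load_in_8bit: True
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base = AutoModelForCausalLM.from_pretrained(
    "BASE_13B_MODEL_ID",                     # placeholder: the base model is not stated in the card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Jonathaniu/alpaca-bitcoin-tweets-sentiment-13b")
```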
casque/TemplarAssassinv0.2
casque
2023-07-11T01:29:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-11T01:26:51Z
--- license: creativeml-openrail-m ---
liyingjian/Reinforce-policy-gradient
liyingjian
2023-07-11T01:28:57Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-11T01:28:48Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-policy-gradient results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 403.00 +/- 194.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
bastianchinchon/nominal-groups-recognition-roberta-clinical-wl-es
bastianchinchon
2023-07-11T01:28:34Z
120
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "token-classification", "generated_from_trainer", "es", "dataset:bastianchinchon/spanish_nominal_groups_conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-11T01:00:00Z
--- language: - es license: apache-2.0 tags: - generated_from_trainer datasets: - bastianchinchon/spanish_nominal_groups_conll2003 model-index: - name: nominal-groups-recognition-roberta-clinical-wl-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nominal-groups-recognition-roberta-clinical-wl-es This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the bastianchinchon/spanish_nominal_groups_conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.2226 - Body Part Precision: 0.7427 - Body Part Recall: 0.7966 - Body Part F1: 0.7687 - Body Part Number: 413 - Disease Precision: 0.7915 - Disease Recall: 0.8174 - Disease F1: 0.8042 - Disease Number: 975 - Family Member Precision: 0.8286 - Family Member Recall: 0.9667 - Family Member F1: 0.8923 - Family Member Number: 30 - Medication Precision: 0.7905 - Medication Recall: 0.8925 - Medication F1: 0.8384 - Medication Number: 93 - Procedure Precision: 0.7105 - Procedure Recall: 0.7814 - Procedure F1: 0.7443 - Procedure Number: 311 - Overall Precision: 0.7666 - Overall Recall: 0.8128 - Overall F1: 0.7890 - Overall Accuracy: 0.9374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.356 | 1.0 | 1004 | 0.2241 | 0.7283 | 0.7724 | 0.7497 | 413 | 0.7603 | 0.8133 | 0.7859 | 975 | 0.9062 | 0.9667 | 0.9355 | 30 | 0.7547 | 0.8602 | 0.8040 | 93 | 0.6464 | 0.7524 | 0.6954 | 311 | 0.7345 | 0.7986 | 0.7652 | 0.9319 | | 0.1823 | 2.0 | 2008 | 0.2226 | 0.7427 | 0.7966 | 0.7687 | 413 | 0.7915 | 0.8174 | 0.8042 | 975 | 0.8286 | 0.9667 | 0.8923 | 30 | 0.7905 | 0.8925 | 0.8384 | 93 | 0.7105 | 0.7814 | 0.7443 | 311 | 0.7666 | 0.8128 | 0.7890 | 0.9374 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
AndreNasci/distilbert-base-uncased-finetuned-cola
AndreNasci
2023-07-11T01:24:44Z
62
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T23:58:13Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: AndreNasci/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AndreNasci/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1932 - Validation Loss: 0.5147 - Train Matthews Correlation: 0.5469 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5120 | 0.4538 | 0.4858 | 0 | | 0.3206 | 0.4722 | 0.5116 | 1 | | 0.1932 | 0.5147 | 0.5469 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
casque/VengefulSpiritv0.1
casque
2023-07-11T01:20:00Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-11T01:17:11Z
--- license: creativeml-openrail-m ---
yecao/text
yecao
2023-07-11T01:14:06Z
0
0
allennlp
[ "allennlp", "glm", "chatglm", "thudm", "text-classification", "zh", "en", "dataset:cerebras/SlimPajama-627B", "arxiv:2103.10360", "arxiv:2210.02414", "arxiv:1911.02150", "license:bigscience-bloom-rail-1.0", "region:us" ]
text-classification
2023-07-05T08:10:19Z
--- language: - zh - en tags: - glm - chatglm - thudm license: bigscience-bloom-rail-1.0 datasets: - cerebras/SlimPajama-627B metrics: - bertscore library_name: allennlp pipeline_tag: text-classification --- # ChatGLM2-6B <p align="center"> 💻 <a href="https://github.com/THUDM/ChatGLM2-6B" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1y7pqoloy-9b1g6T6JjA8J0KxvUjbwJw" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a> </p> ## 介绍 ChatGLM**2**-6B 是开源中英双语对话模型 [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) 的第二代版本,在保留了初代模型对话流畅、部署门槛较低等众多优秀特性的基础之上,ChatGLM**2**-6B 引入了如下新特性: 1. **更强大的性能**:基于 ChatGLM 初代模型的开发经验,我们全面升级了 ChatGLM2-6B 的基座模型。ChatGLM2-6B 使用了 [GLM](https://github.com/THUDM/GLM) 的混合目标函数,经过了 1.4T 中英标识符的预训练与人类偏好对齐训练,[评测结果](#评测结果)显示,相比于初代模型,ChatGLM2-6B 在 MMLU(+23%)、CEval(+33%)、GSM8K(+571%) 、BBH(+60%)等数据集上的性能取得了大幅度的提升,在同尺寸开源模型中具有较强的竞争力。 2. **更长的上下文**:基于 [FlashAttention](https://github.com/HazyResearch/flash-attention) 技术,我们将基座模型的上下文长度(Context Length)由 ChatGLM-6B 的 2K 扩展到了 32K,并在对话阶段使用 8K 的上下文长度训练,允许更多轮次的对话。但当前版本的 ChatGLM2-6B 对单轮超长文档的理解能力有限,我们会在后续迭代升级中着重进行优化。 3. **更高效的推理**:基于 [Multi-Query Attention](http://arxiv.org/abs/1911.02150) 技术,ChatGLM2-6B 有更高效的推理速度和更低的显存占用:在官方的模型实现下,推理速度相比初代提升了 42%,INT4 量化下,6G 显存支持的对话长度由 1K 提升到了 8K。 ChatGLM**2**-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B). It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the following new features: 1. **Stronger Performance**: Based on the development experience of the first-generation ChatGLM model, we have fully upgraded the base model of ChatGLM2-6B. ChatGLM2-6B uses the hybrid objective function of [GLM](https://github.com/THUDM/GLM), and has undergone pre-training with 1.4T bilingual tokens and human preference alignment training. The [evaluation results](README.md#evaluation-results) show that, compared to the first-generation model, ChatGLM2-6B has achieved substantial improvements in performance on datasets like MMLU (+23%), CEval (+33%), GSM8K (+571%), BBH (+60%), showing strong competitiveness among models of the same size. 2. **Longer Context**: Based on [FlashAttention](https://github.com/HazyResearch/flash-attention) technique, we have extended the context length of the base model from 2K in ChatGLM-6B to 32K, and trained with a context length of 8K during the dialogue alignment, allowing for more rounds of dialogue. However, the current version of ChatGLM2-6B has limited understanding of single-round ultra-long documents, which we will focus on optimizing in future iterations. 3. 
**More Efficient Inference**: Based on [Multi-Query Attention](http://arxiv.org/abs/1911.02150) technique, ChatGLM2-6B has more efficient inference speed and lower GPU memory usage: under the official implementation, the inference speed has increased by 42% compared to the first generation; under INT4 quantization, the dialogue length supported by 6G GPU memory has increased from 1K to 8K. ## 软件依赖 ```shell pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate ``` ## 代码调用 可以通过如下代码调用 ChatGLM-6B 模型来生成对话: ```ipython >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True) >>> model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda() >>> model = model.eval() >>> response, history = model.chat(tokenizer, "你好", history=[]) >>> print(response) 你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。 >>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history) >>> print(response) 晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法: 1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。 2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。 3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。 4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。 5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。 6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。 如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。 ``` 关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM2-6B)。 For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM2-6B). ## Change Log * v1.0 ## 协议 本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源,ChatGLM2-6B 模型的权重的使用则需要遵循 [Model License](MODEL_LICENSE)。 ## 引用 如果你觉得我们的工作有帮助的话,请考虑引用下列论文,ChatGLM2-6B 的论文会在近期公布,尽情期待~ ``` @article{zeng2022glm, title={Glm-130b: An open bilingual pre-trained model}, author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others}, journal={arXiv preprint arXiv:2210.02414}, year={2022} } ``` ``` @inproceedings{du2022glm, title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling}, author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={320--335}, year={2022} } ```
lucs1265/distilbert-base-uncased-finetuned-cola
lucs1265
2023-07-11T01:11:57Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-11T01:06:54Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: lucs1265/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # lucs1265/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1898 - Validation Loss: 0.5233 - Train Matthews Correlation: 0.5286 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5194 | 0.4536 | 0.4725 | 0 | | 0.3249 | 0.4763 | 0.4867 | 1 | | 0.1898 | 0.5233 | 0.5286 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
casque/Windrunnerv0.2
casque
2023-07-11T01:03:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-11T01:00:46Z
--- license: creativeml-openrail-m ---
VitCon/q-FrozenLake-v1-4x4-noSlippery
VitCon
2023-07-11T01:01:55Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-11T01:01:53Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="VitCon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False, etc.) env = gym.make(model["env_id"]) ```
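A hedged continuation of the usage snippet above that downloads the pickle with `huggingface_hub` and rolls out the Q-table greedily; the `"qtable"` key and the gymnasium step API follow the Deep RL course convention and are assumptions, not details from this card.

```python
# Sketch only: greedy evaluation of the pickled Q-table from the repo above.
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="VitCon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make("FrozenLake-v1", map_name="4x4", is_slippery=False)
qtable = model["qtable"]  # assumed key, following the Deep RL course pickle layout

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))          # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```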
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7
jordyvl
2023-07-11T01:00:28Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T23:46:12Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2544 - Accuracy: 0.6375 - Brier Loss: 0.4805 - Nll: 3.0517 - F1 Micro: 0.6375 - F1 Macro: 0.6394 - Ece: 0.1654 - Aurc: 0.1376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 25 | 3.2176 | 0.1275 | 0.9297 | 15.5568 | 0.1275 | 0.1255 | 0.1544 | 0.8595 | | No log | 2.0 | 50 | 2.4392 | 0.405 | 0.7503 | 9.6083 | 0.405 | 0.3723 | 0.1816 | 0.3640 | | No log | 3.0 | 75 | 1.9211 | 0.5025 | 0.6287 | 5.6023 | 0.5025 | 0.4930 | 0.1991 | 0.2451 | | No log | 4.0 | 100 | 1.7474 | 0.5375 | 0.5956 | 4.5712 | 0.5375 | 0.5387 | 0.1677 | 0.2244 | | No log | 5.0 | 125 | 1.7107 | 0.535 | 0.6051 | 4.3431 | 0.535 | 0.5180 | 0.1796 | 0.2269 | | No log | 6.0 | 150 | 1.7144 | 0.545 | 0.5988 | 3.6699 | 0.545 | 0.5455 | 0.1918 | 0.2253 | | No log | 7.0 | 175 | 1.9096 | 0.5625 | 0.6262 | 4.6856 | 0.5625 | 0.5459 | 0.1966 | 0.2362 | | No log | 8.0 | 200 | 1.6325 | 0.575 | 0.5815 | 3.9279 | 0.575 | 0.5705 | 0.1893 | 0.2026 | | No log | 9.0 | 225 | 1.8268 | 0.56 | 0.6088 | 4.5140 | 0.56 | 0.5482 | 0.1976 | 0.2213 | | No log | 10.0 | 250 | 1.9253 | 0.5575 | 0.6493 | 4.2860 | 0.5575 | 0.5427 | 0.2286 | 0.2445 | | No log | 11.0 | 275 | 1.6941 | 0.5725 | 0.5940 | 3.9317 | 0.5725 | 0.5827 | 0.2019 | 0.2232 | | No log | 12.0 | 300 | 1.8197 | 0.5575 | 0.6138 | 4.7928 | 0.5575 | 0.5476 | 0.2079 | 0.2240 | | No log | 13.0 | 325 | 1.8958 | 0.54 | 0.6508 | 4.2978 | 0.54 | 0.5338 | 0.2379 | 0.2357 | | No log | 14.0 | 350 | 1.8939 | 0.535 | 0.6522 | 4.5557 | 0.535 | 0.5143 | 0.2324 | 0.2350 | | No log | 15.0 | 375 | 1.8018 | 0.585 | 0.6042 | 4.4728 | 0.585 | 0.5829 | 0.2205 | 0.2182 | | No log | 16.0 | 400 | 1.7645 | 0.5975 | 0.5978 | 3.9939 | 0.5975 | 0.5992 | 0.2130 | 0.1927 | | No log | 17.0 | 425 | 1.6392 | 0.5925 | 0.5842 | 3.6783 | 0.5925 | 0.6039 | 0.1986 | 0.2017 | | No log | 18.0 | 450 | 1.6124 | 0.5875 | 0.5761 | 4.0535 | 0.5875 | 0.5721 | 0.2060 | 0.1792 | | No log | 19.0 | 475 | 1.7517 | 0.585 | 0.6102 | 3.9076 | 0.585 | 0.5786 | 0.2082 | 0.2071 | | 0.6436 | 20.0 | 500 | 1.7467 | 0.5575 | 0.6166 | 3.5052 | 0.5575 | 0.5476 | 0.2252 | 0.2247 | | 0.6436 | 21.0 | 525 | 1.6719 | 0.5825 | 0.5745 | 4.1235 | 0.5825 | 0.5877 | 0.1831 | 0.1723 | | 0.6436 | 22.0 | 550 | 1.4222 | 0.605 | 0.5237 | 3.2051 | 0.605 
| 0.6083 | 0.1813 | 0.1559 | | 0.6436 | 23.0 | 575 | 1.6436 | 0.595 | 0.5701 | 4.3949 | 0.595 | 0.5834 | 0.1921 | 0.1901 | | 0.6436 | 24.0 | 600 | 1.4244 | 0.6075 | 0.5197 | 3.3207 | 0.6075 | 0.6100 | 0.1548 | 0.1616 | | 0.6436 | 25.0 | 625 | 1.4567 | 0.6075 | 0.5356 | 3.5288 | 0.6075 | 0.6107 | 0.1768 | 0.1652 | | 0.6436 | 26.0 | 650 | 1.5889 | 0.595 | 0.5587 | 4.1521 | 0.595 | 0.5907 | 0.1943 | 0.1768 | | 0.6436 | 27.0 | 675 | 1.4828 | 0.5725 | 0.5532 | 3.4259 | 0.5725 | 0.5720 | 0.2125 | 0.1803 | | 0.6436 | 28.0 | 700 | 1.4671 | 0.5975 | 0.5509 | 3.2612 | 0.5975 | 0.6006 | 0.1983 | 0.1797 | | 0.6436 | 29.0 | 725 | 1.4049 | 0.6225 | 0.5273 | 3.3136 | 0.6225 | 0.6237 | 0.1995 | 0.1600 | | 0.6436 | 30.0 | 750 | 1.4039 | 0.6175 | 0.5208 | 3.2588 | 0.6175 | 0.6063 | 0.1770 | 0.1534 | | 0.6436 | 31.0 | 775 | 1.4333 | 0.6 | 0.5378 | 3.6417 | 0.6 | 0.5995 | 0.1899 | 0.1632 | | 0.6436 | 32.0 | 800 | 1.3311 | 0.64 | 0.5032 | 3.0056 | 0.64 | 0.6394 | 0.1699 | 0.1476 | | 0.6436 | 33.0 | 825 | 1.3361 | 0.61 | 0.5079 | 3.2304 | 0.61 | 0.6123 | 0.1536 | 0.1517 | | 0.6436 | 34.0 | 850 | 1.2984 | 0.64 | 0.4982 | 3.1446 | 0.64 | 0.6444 | 0.1636 | 0.1424 | | 0.6436 | 35.0 | 875 | 1.3153 | 0.6275 | 0.4995 | 3.0722 | 0.6275 | 0.6288 | 0.1634 | 0.1486 | | 0.6436 | 36.0 | 900 | 1.2773 | 0.6375 | 0.4880 | 2.7136 | 0.6375 | 0.6422 | 0.1606 | 0.1411 | | 0.6436 | 37.0 | 925 | 1.2881 | 0.64 | 0.4946 | 3.0452 | 0.64 | 0.6437 | 0.1732 | 0.1440 | | 0.6436 | 38.0 | 950 | 1.2609 | 0.64 | 0.4824 | 2.7407 | 0.64 | 0.6430 | 0.1485 | 0.1424 | | 0.6436 | 39.0 | 975 | 1.2685 | 0.645 | 0.4869 | 2.7203 | 0.645 | 0.6484 | 0.1680 | 0.1398 | | 0.0861 | 40.0 | 1000 | 1.2546 | 0.635 | 0.4808 | 2.7042 | 0.635 | 0.6356 | 0.1669 | 0.1416 | | 0.0861 | 41.0 | 1025 | 1.2599 | 0.6425 | 0.4858 | 2.6880 | 0.6425 | 0.6457 | 0.1539 | 0.1387 | | 0.0861 | 42.0 | 1050 | 1.2413 | 0.635 | 0.4783 | 2.8343 | 0.635 | 0.6361 | 0.1679 | 0.1369 | | 0.0861 | 43.0 | 1075 | 1.2670 | 0.6325 | 0.4901 | 2.8366 | 0.6325 | 0.6337 | 0.1501 | 0.1399 | | 0.0861 | 44.0 | 1100 | 1.2793 | 0.63 | 0.4919 | 3.1711 | 0.63 | 0.6309 | 0.1672 | 0.1465 | | 0.0861 | 45.0 | 1125 | 1.2555 | 0.635 | 0.4844 | 2.9284 | 0.635 | 0.6379 | 0.1791 | 0.1401 | | 0.0861 | 46.0 | 1150 | 1.2491 | 0.635 | 0.4806 | 2.8475 | 0.635 | 0.6358 | 0.1611 | 0.1392 | | 0.0861 | 47.0 | 1175 | 1.2533 | 0.6325 | 0.4837 | 2.8229 | 0.6325 | 0.6352 | 0.1623 | 0.1378 | | 0.0861 | 48.0 | 1200 | 1.2602 | 0.635 | 0.4857 | 2.9963 | 0.635 | 0.6368 | 0.1535 | 0.1426 | | 0.0861 | 49.0 | 1225 | 1.2598 | 0.635 | 0.4848 | 2.8569 | 0.635 | 0.6370 | 0.1718 | 0.1389 | | 0.0861 | 50.0 | 1250 | 1.2577 | 0.6225 | 0.4839 | 2.8645 | 0.6225 | 0.6237 | 0.1678 | 0.1420 | | 0.0861 | 51.0 | 1275 | 1.2547 | 0.63 | 0.4817 | 2.8344 | 0.63 | 0.6314 | 0.1721 | 0.1399 | | 0.0861 | 52.0 | 1300 | 1.2525 | 0.64 | 0.4819 | 2.7720 | 0.64 | 0.6411 | 0.1567 | 0.1378 | | 0.0861 | 53.0 | 1325 | 1.2627 | 0.6325 | 0.4854 | 2.9202 | 0.6325 | 0.6337 | 0.1688 | 0.1406 | | 0.0861 | 54.0 | 1350 | 1.2565 | 0.63 | 0.4836 | 2.8392 | 0.63 | 0.6320 | 0.1612 | 0.1404 | | 0.0861 | 55.0 | 1375 | 1.2514 | 0.6325 | 0.4813 | 2.9887 | 0.6325 | 0.6343 | 0.1652 | 0.1386 | | 0.0861 | 56.0 | 1400 | 1.2541 | 0.6275 | 0.4822 | 2.9067 | 0.6275 | 0.6296 | 0.1649 | 0.1401 | | 0.0861 | 57.0 | 1425 | 1.2529 | 0.64 | 0.4810 | 2.9166 | 0.64 | 0.6432 | 0.1765 | 0.1372 | | 0.0861 | 58.0 | 1450 | 1.2464 | 0.6275 | 0.4799 | 2.9713 | 0.6275 | 0.6291 | 0.1653 | 0.1401 | | 0.0861 | 59.0 | 1475 | 1.2576 | 0.63 | 0.4826 | 2.9124 | 0.63 | 0.6323 | 0.1557 | 0.1397 | | 0.0496 | 60.0 | 
1500 | 1.2494 | 0.63 | 0.4804 | 2.8355 | 0.63 | 0.6317 | 0.1672 | 0.1390 | | 0.0496 | 61.0 | 1525 | 1.2496 | 0.6325 | 0.4803 | 2.9091 | 0.6325 | 0.6352 | 0.1510 | 0.1383 | | 0.0496 | 62.0 | 1550 | 1.2592 | 0.6375 | 0.4838 | 2.8980 | 0.6375 | 0.6384 | 0.1758 | 0.1398 | | 0.0496 | 63.0 | 1575 | 1.2504 | 0.63 | 0.4806 | 2.9843 | 0.63 | 0.6316 | 0.1691 | 0.1391 | | 0.0496 | 64.0 | 1600 | 1.2528 | 0.6325 | 0.4810 | 2.9045 | 0.6325 | 0.6349 | 0.1737 | 0.1388 | | 0.0496 | 65.0 | 1625 | 1.2589 | 0.6425 | 0.4833 | 2.9817 | 0.6425 | 0.6447 | 0.1719 | 0.1380 | | 0.0496 | 66.0 | 1650 | 1.2531 | 0.63 | 0.4811 | 2.9027 | 0.63 | 0.6321 | 0.1751 | 0.1391 | | 0.0496 | 67.0 | 1675 | 1.2520 | 0.635 | 0.4808 | 2.9794 | 0.635 | 0.6379 | 0.1715 | 0.1378 | | 0.0496 | 68.0 | 1700 | 1.2543 | 0.64 | 0.4815 | 2.9771 | 0.64 | 0.6420 | 0.1562 | 0.1380 | | 0.0496 | 69.0 | 1725 | 1.2538 | 0.6325 | 0.4808 | 2.9080 | 0.6325 | 0.6345 | 0.1681 | 0.1385 | | 0.0496 | 70.0 | 1750 | 1.2543 | 0.6325 | 0.4813 | 2.9102 | 0.6325 | 0.6347 | 0.1725 | 0.1390 | | 0.0496 | 71.0 | 1775 | 1.2534 | 0.6325 | 0.4809 | 2.9778 | 0.6325 | 0.6353 | 0.1495 | 0.1385 | | 0.0496 | 72.0 | 1800 | 1.2539 | 0.6375 | 0.4809 | 2.9024 | 0.6375 | 0.6394 | 0.1588 | 0.1381 | | 0.0496 | 73.0 | 1825 | 1.2531 | 0.635 | 0.4806 | 2.9812 | 0.635 | 0.6378 | 0.1552 | 0.1380 | | 0.0496 | 74.0 | 1850 | 1.2531 | 0.635 | 0.4805 | 2.9783 | 0.635 | 0.6377 | 0.1700 | 0.1380 | | 0.0496 | 75.0 | 1875 | 1.2533 | 0.6375 | 0.4809 | 2.9772 | 0.6375 | 0.6400 | 0.1645 | 0.1372 | | 0.0496 | 76.0 | 1900 | 1.2539 | 0.6375 | 0.4808 | 2.9777 | 0.6375 | 0.6393 | 0.1675 | 0.1376 | | 0.0496 | 77.0 | 1925 | 1.2537 | 0.635 | 0.4808 | 2.9832 | 0.635 | 0.6375 | 0.1648 | 0.1381 | | 0.0496 | 78.0 | 1950 | 1.2539 | 0.6375 | 0.4807 | 2.9769 | 0.6375 | 0.6394 | 0.1636 | 0.1374 | | 0.0496 | 79.0 | 1975 | 1.2534 | 0.6375 | 0.4805 | 2.9796 | 0.6375 | 0.6399 | 0.1599 | 0.1375 | | 0.048 | 80.0 | 2000 | 1.2537 | 0.6375 | 0.4806 | 3.0539 | 0.6375 | 0.6399 | 0.1657 | 0.1375 | | 0.048 | 81.0 | 2025 | 1.2535 | 0.6375 | 0.4805 | 3.0534 | 0.6375 | 0.6399 | 0.1728 | 0.1375 | | 0.048 | 82.0 | 2050 | 1.2539 | 0.6375 | 0.4806 | 2.9831 | 0.6375 | 0.6393 | 0.1674 | 0.1375 | | 0.048 | 83.0 | 2075 | 1.2542 | 0.6375 | 0.4807 | 3.0538 | 0.6375 | 0.6399 | 0.1674 | 0.1375 | | 0.048 | 84.0 | 2100 | 1.2539 | 0.6375 | 0.4805 | 3.0531 | 0.6375 | 0.6394 | 0.1564 | 0.1375 | | 0.048 | 85.0 | 2125 | 1.2542 | 0.6375 | 0.4806 | 3.0531 | 0.6375 | 0.6393 | 0.1676 | 0.1376 | | 0.048 | 86.0 | 2150 | 1.2541 | 0.6375 | 0.4806 | 3.0527 | 0.6375 | 0.6399 | 0.1691 | 0.1375 | | 0.048 | 87.0 | 2175 | 1.2542 | 0.6375 | 0.4805 | 3.0525 | 0.6375 | 0.6394 | 0.1677 | 0.1376 | | 0.048 | 88.0 | 2200 | 1.2542 | 0.6375 | 0.4806 | 3.0525 | 0.6375 | 0.6393 | 0.1651 | 0.1375 | | 0.048 | 89.0 | 2225 | 1.2543 | 0.6375 | 0.4805 | 3.0525 | 0.6375 | 0.6394 | 0.1601 | 0.1375 | | 0.048 | 90.0 | 2250 | 1.2543 | 0.6375 | 0.4805 | 3.0521 | 0.6375 | 0.6394 | 0.1661 | 0.1375 | | 0.048 | 91.0 | 2275 | 1.2541 | 0.6375 | 0.4805 | 3.0521 | 0.6375 | 0.6394 | 0.1665 | 0.1376 | | 0.048 | 92.0 | 2300 | 1.2542 | 0.6375 | 0.4805 | 3.0521 | 0.6375 | 0.6394 | 0.1638 | 0.1375 | | 0.048 | 93.0 | 2325 | 1.2544 | 0.6375 | 0.4805 | 3.0518 | 0.6375 | 0.6394 | 0.1671 | 0.1376 | | 0.048 | 94.0 | 2350 | 1.2543 | 0.6375 | 0.4805 | 3.0519 | 0.6375 | 0.6394 | 0.1601 | 0.1376 | | 0.048 | 95.0 | 2375 | 1.2544 | 0.6375 | 0.4805 | 3.0518 | 0.6375 | 0.6394 | 0.1638 | 0.1376 | | 0.048 | 96.0 | 2400 | 1.2544 | 0.6375 | 0.4805 | 3.0518 | 0.6375 | 0.6394 | 0.1638 | 0.1376 | | 0.048 | 97.0 | 2425 | 
1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1655 | 0.1376 | | 0.048 | 98.0 | 2450 | 1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1638 | 0.1376 | | 0.048 | 99.0 | 2475 | 1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1654 | 0.1376 | | 0.0478 | 100.0 | 2500 | 1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1654 | 0.1376 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.12.0 - Tokenizers 0.12.1
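A hedged inference sketch for the checkpoint above; the image path is a placeholder, and because the card does not list the 16 RVL-CDIP label names, only the predicted class id is printed.

```python
# Illustrative only: single-image inference with the fine-tuned ViT classifier above.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("scanned_page.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted class id:", logits.argmax(-1).item())
```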
MaitreHibou/ppo-SnowballTarget
MaitreHibou
2023-07-11T01:00:11Z
16
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-11T01:00:06Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: MaitreHibou/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
casque/Linav0.2
casque
2023-07-11T00:46:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-11T00:43:30Z
--- license: creativeml-openrail-m ---
casque/CrystalMaidenv0.2
casque
2023-07-11T00:42:48Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-11T00:39:34Z
--- license: creativeml-openrail-m ---
foster123/test
foster123
2023-07-11T00:39:29Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-10T06:23:46Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
mrovejaxd/ABL_a
mrovejaxd
2023-07-10T23:53:17Z
104
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-19T13:23:00Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: ABL_a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ABL_a This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7326 - Accuracy: 0.7 - F1: 0.7004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.2
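A hedged usage sketch for the classifier above; the card does not document the label set, so the pipeline may return generic LABEL_n ids rather than human-readable class names.

```python
# Illustrative only: text-classification inference with the fine-tuned Spanish BERT above.
from transformers import pipeline

clf = pipeline("text-classification", model="mrovejaxd/ABL_a")
# Invented example sentence; labels/ids depend on the (undocumented) training label set.
print(clf("ejemplo de texto a clasificar"))
```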
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.5
jordyvl
2023-07-10T23:45:37Z
164
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T22:31:20Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.5 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3067 - Accuracy: 0.64 - Brier Loss: 0.4889 - Nll: 2.7590 - F1 Micro: 0.64 - F1 Macro: 0.6422 - Ece: 0.1482 - Aurc: 0.1465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 25 | 3.5436 | 0.1275 | 0.9288 | 15.5432 | 0.1275 | 0.1271 | 0.1597 | 0.8570 | | No log | 2.0 | 50 | 2.6686 | 0.4025 | 0.7453 | 9.6119 | 0.4025 | 0.3632 | 0.1957 | 0.3597 | | No log | 3.0 | 75 | 2.0708 | 0.495 | 0.6252 | 5.5129 | 0.495 | 0.4799 | 0.1581 | 0.2478 | | No log | 4.0 | 100 | 1.8472 | 0.5475 | 0.5792 | 4.3917 | 0.5475 | 0.5504 | 0.1665 | 0.2138 | | No log | 5.0 | 125 | 1.8657 | 0.535 | 0.6173 | 4.2639 | 0.535 | 0.5241 | 0.1890 | 0.2311 | | No log | 6.0 | 150 | 1.7791 | 0.5725 | 0.5777 | 3.7697 | 0.5725 | 0.5672 | 0.1634 | 0.2157 | | No log | 7.0 | 175 | 1.7957 | 0.555 | 0.5997 | 4.3973 | 0.555 | 0.5219 | 0.1885 | 0.2118 | | No log | 8.0 | 200 | 1.7306 | 0.56 | 0.5858 | 4.3403 | 0.56 | 0.5499 | 0.1808 | 0.2076 | | No log | 9.0 | 225 | 1.9129 | 0.55 | 0.6072 | 5.3639 | 0.55 | 0.5253 | 0.2106 | 0.2034 | | No log | 10.0 | 250 | 1.9057 | 0.565 | 0.6050 | 4.7359 | 0.565 | 0.5514 | 0.2051 | 0.2211 | | No log | 11.0 | 275 | 1.8169 | 0.5825 | 0.5990 | 4.2043 | 0.5825 | 0.5787 | 0.2048 | 0.2099 | | No log | 12.0 | 300 | 1.9194 | 0.55 | 0.6387 | 3.9608 | 0.55 | 0.5457 | 0.2246 | 0.2475 | | No log | 13.0 | 325 | 1.7830 | 0.585 | 0.5961 | 3.9468 | 0.585 | 0.5717 | 0.1971 | 0.2235 | | No log | 14.0 | 350 | 1.8241 | 0.5575 | 0.6112 | 3.6498 | 0.5575 | 0.5554 | 0.2123 | 0.2116 | | No log | 15.0 | 375 | 1.8344 | 0.58 | 0.5950 | 3.9880 | 0.58 | 0.5741 | 0.1872 | 0.2168 | | No log | 16.0 | 400 | 1.8909 | 0.57 | 0.5987 | 4.6112 | 0.57 | 0.5596 | 0.2096 | 0.2100 | | No log | 17.0 | 425 | 1.6662 | 0.585 | 0.5645 | 4.0403 | 0.585 | 0.5752 | 0.2000 | 0.1872 | | No log | 18.0 | 450 | 1.5986 | 0.6175 | 0.5315 | 3.8888 | 0.6175 | 0.6162 | 0.1724 | 0.1660 | | No log | 19.0 | 475 | 1.5392 | 0.5925 | 0.5593 | 2.8593 | 0.5925 | 0.5823 | 0.2056 | 0.1777 | | 0.718 | 20.0 | 500 | 1.5257 | 0.595 | 0.5386 | 3.5024 | 0.595 | 0.5817 | 0.1909 | 0.1680 | | 0.718 | 21.0 | 525 | 1.6699 | 0.6125 | 0.5570 | 3.9342 | 0.6125 | 0.6121 | 0.2006 | 0.1898 | | 0.718 | 22.0 | 550 | 1.5804 | 0.605 | 0.5542 | 3.7562 | 0.605 | 0.5828 | 0.1888 
| 0.1826 | | 0.718 | 23.0 | 575 | 1.5580 | 0.6025 | 0.5407 | 3.4731 | 0.6025 | 0.5877 | 0.1780 | 0.1693 | | 0.718 | 24.0 | 600 | 1.5693 | 0.58 | 0.5717 | 3.1009 | 0.58 | 0.5830 | 0.1954 | 0.2041 | | 0.718 | 25.0 | 625 | 1.6368 | 0.57 | 0.5826 | 3.7067 | 0.57 | 0.5684 | 0.2027 | 0.2116 | | 0.718 | 26.0 | 650 | 1.3959 | 0.635 | 0.5018 | 3.1312 | 0.635 | 0.6342 | 0.1814 | 0.1544 | | 0.718 | 27.0 | 675 | 1.4555 | 0.635 | 0.5130 | 3.1374 | 0.635 | 0.6344 | 0.1733 | 0.1727 | | 0.718 | 28.0 | 700 | 1.5010 | 0.605 | 0.5361 | 3.6647 | 0.605 | 0.6030 | 0.1811 | 0.1725 | | 0.718 | 29.0 | 725 | 1.6266 | 0.585 | 0.5777 | 3.1233 | 0.585 | 0.5757 | 0.1955 | 0.1965 | | 0.718 | 30.0 | 750 | 1.4467 | 0.635 | 0.5196 | 3.3019 | 0.635 | 0.6371 | 0.1856 | 0.1759 | | 0.718 | 31.0 | 775 | 1.5051 | 0.6 | 0.5439 | 3.5968 | 0.6 | 0.5950 | 0.2020 | 0.1776 | | 0.718 | 32.0 | 800 | 1.3890 | 0.6325 | 0.5001 | 3.2391 | 0.6325 | 0.6310 | 0.1639 | 0.1502 | | 0.718 | 33.0 | 825 | 1.4150 | 0.6075 | 0.5208 | 3.4287 | 0.6075 | 0.6102 | 0.1862 | 0.1667 | | 0.718 | 34.0 | 850 | 1.3743 | 0.6125 | 0.5133 | 3.0028 | 0.6125 | 0.6123 | 0.1927 | 0.1585 | | 0.718 | 35.0 | 875 | 1.3564 | 0.6325 | 0.4960 | 2.8056 | 0.6325 | 0.6344 | 0.1624 | 0.1490 | | 0.718 | 36.0 | 900 | 1.3634 | 0.6325 | 0.5005 | 2.5056 | 0.6325 | 0.6352 | 0.1808 | 0.1513 | | 0.718 | 37.0 | 925 | 1.3707 | 0.62 | 0.4991 | 3.2196 | 0.62 | 0.6209 | 0.1509 | 0.1530 | | 0.718 | 38.0 | 950 | 1.3311 | 0.635 | 0.4937 | 2.8078 | 0.635 | 0.6383 | 0.1645 | 0.1478 | | 0.718 | 39.0 | 975 | 1.2896 | 0.635 | 0.4838 | 2.7910 | 0.635 | 0.6319 | 0.1524 | 0.1420 | | 0.0894 | 40.0 | 1000 | 1.3209 | 0.65 | 0.4935 | 2.7909 | 0.65 | 0.6523 | 0.1674 | 0.1442 | | 0.0894 | 41.0 | 1025 | 1.3280 | 0.6525 | 0.4903 | 2.9461 | 0.6525 | 0.6536 | 0.1645 | 0.1457 | | 0.0894 | 42.0 | 1050 | 1.3220 | 0.65 | 0.4893 | 2.9579 | 0.65 | 0.6505 | 0.1577 | 0.1480 | | 0.0894 | 43.0 | 1075 | 1.3155 | 0.6425 | 0.4912 | 2.8699 | 0.6425 | 0.6465 | 0.1479 | 0.1461 | | 0.0894 | 44.0 | 1100 | 1.3243 | 0.6375 | 0.4946 | 2.9297 | 0.6375 | 0.6393 | 0.1624 | 0.1494 | | 0.0894 | 45.0 | 1125 | 1.3123 | 0.645 | 0.4891 | 2.8813 | 0.645 | 0.6464 | 0.1710 | 0.1443 | | 0.0894 | 46.0 | 1150 | 1.3051 | 0.6425 | 0.4859 | 2.8460 | 0.6425 | 0.6434 | 0.1570 | 0.1431 | | 0.0894 | 47.0 | 1175 | 1.3082 | 0.645 | 0.4871 | 2.7740 | 0.645 | 0.6460 | 0.1740 | 0.1449 | | 0.0894 | 48.0 | 1200 | 1.3026 | 0.6475 | 0.4849 | 2.7773 | 0.6475 | 0.6505 | 0.1800 | 0.1440 | | 0.0894 | 49.0 | 1225 | 1.3141 | 0.6375 | 0.4895 | 2.7660 | 0.6375 | 0.6396 | 0.1737 | 0.1463 | | 0.0894 | 50.0 | 1250 | 1.3147 | 0.6325 | 0.4879 | 2.7744 | 0.6325 | 0.6351 | 0.1609 | 0.1450 | | 0.0894 | 51.0 | 1275 | 1.3080 | 0.64 | 0.4883 | 2.7668 | 0.64 | 0.6423 | 0.1636 | 0.1450 | | 0.0894 | 52.0 | 1300 | 1.3087 | 0.6425 | 0.4890 | 2.8436 | 0.6425 | 0.6448 | 0.1520 | 0.1462 | | 0.0894 | 53.0 | 1325 | 1.3101 | 0.64 | 0.4888 | 2.7708 | 0.64 | 0.6415 | 0.1602 | 0.1452 | | 0.0894 | 54.0 | 1350 | 1.3181 | 0.6425 | 0.4927 | 2.8450 | 0.6425 | 0.6446 | 0.1732 | 0.1490 | | 0.0894 | 55.0 | 1375 | 1.3144 | 0.6375 | 0.4915 | 2.7718 | 0.6375 | 0.6399 | 0.1542 | 0.1473 | | 0.0894 | 56.0 | 1400 | 1.3138 | 0.645 | 0.4923 | 2.6836 | 0.645 | 0.6476 | 0.1721 | 0.1471 | | 0.0894 | 57.0 | 1425 | 1.3156 | 0.645 | 0.4920 | 2.7653 | 0.645 | 0.6468 | 0.1642 | 0.1470 | | 0.0894 | 58.0 | 1450 | 1.3161 | 0.6425 | 0.4919 | 2.7644 | 0.6425 | 0.6450 | 0.1617 | 0.1472 | | 0.0894 | 59.0 | 1475 | 1.3069 | 0.6375 | 0.4877 | 2.7658 | 0.6375 | 0.6396 | 0.1635 | 0.1455 | | 0.0506 | 60.0 | 1500 | 1.3109 | 0.645 | 
0.4904 | 2.8426 | 0.645 | 0.6464 | 0.1605 | 0.1467 | | 0.0506 | 61.0 | 1525 | 1.3111 | 0.6425 | 0.4893 | 2.7618 | 0.6425 | 0.6446 | 0.1704 | 0.1461 | | 0.0506 | 62.0 | 1550 | 1.3053 | 0.6425 | 0.4884 | 2.7648 | 0.6425 | 0.6449 | 0.1602 | 0.1457 | | 0.0506 | 63.0 | 1575 | 1.3097 | 0.64 | 0.4887 | 2.7618 | 0.64 | 0.6423 | 0.1632 | 0.1463 | | 0.0506 | 64.0 | 1600 | 1.3106 | 0.645 | 0.4912 | 2.7681 | 0.645 | 0.6473 | 0.1688 | 0.1469 | | 0.0506 | 65.0 | 1625 | 1.3095 | 0.64 | 0.4902 | 2.7589 | 0.64 | 0.6419 | 0.1560 | 0.1468 | | 0.0506 | 66.0 | 1650 | 1.3073 | 0.645 | 0.4895 | 2.7642 | 0.645 | 0.6473 | 0.1800 | 0.1463 | | 0.0506 | 67.0 | 1675 | 1.3041 | 0.64 | 0.4880 | 2.7619 | 0.64 | 0.6424 | 0.1670 | 0.1454 | | 0.0506 | 68.0 | 1700 | 1.3062 | 0.64 | 0.4887 | 2.7623 | 0.64 | 0.6423 | 0.1671 | 0.1466 | | 0.0506 | 69.0 | 1725 | 1.3075 | 0.64 | 0.4888 | 2.7628 | 0.64 | 0.6424 | 0.1533 | 0.1459 | | 0.0506 | 70.0 | 1750 | 1.3089 | 0.64 | 0.4898 | 2.7607 | 0.64 | 0.6425 | 0.1805 | 0.1466 | | 0.0506 | 71.0 | 1775 | 1.3068 | 0.64 | 0.4889 | 2.7600 | 0.64 | 0.6424 | 0.1592 | 0.1458 | | 0.0506 | 72.0 | 1800 | 1.3076 | 0.6425 | 0.4894 | 2.7599 | 0.6425 | 0.6451 | 0.1766 | 0.1461 | | 0.0506 | 73.0 | 1825 | 1.3071 | 0.6425 | 0.4890 | 2.7609 | 0.6425 | 0.6451 | 0.1538 | 0.1460 | | 0.0506 | 74.0 | 1850 | 1.3062 | 0.64 | 0.4887 | 2.7601 | 0.64 | 0.6422 | 0.1678 | 0.1461 | | 0.0506 | 75.0 | 1875 | 1.3076 | 0.6425 | 0.4891 | 2.7598 | 0.6425 | 0.6451 | 0.1660 | 0.1461 | | 0.0506 | 76.0 | 1900 | 1.3067 | 0.6425 | 0.4890 | 2.7607 | 0.6425 | 0.6450 | 0.1510 | 0.1461 | | 0.0506 | 77.0 | 1925 | 1.3073 | 0.6425 | 0.4891 | 2.7596 | 0.6425 | 0.6451 | 0.1558 | 0.1461 | | 0.0506 | 78.0 | 1950 | 1.3075 | 0.6425 | 0.4894 | 2.7612 | 0.6425 | 0.6451 | 0.1604 | 0.1461 | | 0.0506 | 79.0 | 1975 | 1.3071 | 0.6425 | 0.4889 | 2.7602 | 0.6425 | 0.6452 | 0.1575 | 0.1460 | | 0.0486 | 80.0 | 2000 | 1.3065 | 0.6425 | 0.4889 | 2.7599 | 0.6425 | 0.6450 | 0.1451 | 0.1461 | | 0.0486 | 81.0 | 2025 | 1.3066 | 0.6425 | 0.4889 | 2.7594 | 0.6425 | 0.6451 | 0.1532 | 0.1460 | | 0.0486 | 82.0 | 2050 | 1.3069 | 0.64 | 0.4891 | 2.7599 | 0.64 | 0.6424 | 0.1468 | 0.1463 | | 0.0486 | 83.0 | 2075 | 1.3068 | 0.64 | 0.4889 | 2.7599 | 0.64 | 0.6422 | 0.1551 | 0.1466 | | 0.0486 | 84.0 | 2100 | 1.3067 | 0.64 | 0.4889 | 2.7592 | 0.64 | 0.6424 | 0.1445 | 0.1463 | | 0.0486 | 85.0 | 2125 | 1.3065 | 0.64 | 0.4889 | 2.7591 | 0.64 | 0.6422 | 0.1506 | 0.1465 | | 0.0486 | 86.0 | 2150 | 1.3067 | 0.64 | 0.4889 | 2.7589 | 0.64 | 0.6422 | 0.1637 | 0.1465 | | 0.0486 | 87.0 | 2175 | 1.3069 | 0.64 | 0.4889 | 2.7592 | 0.64 | 0.6422 | 0.1530 | 0.1465 | | 0.0486 | 88.0 | 2200 | 1.3069 | 0.64 | 0.4890 | 2.7591 | 0.64 | 0.6422 | 0.1503 | 0.1465 | | 0.0486 | 89.0 | 2225 | 1.3067 | 0.64 | 0.4889 | 2.7592 | 0.64 | 0.6422 | 0.1547 | 0.1464 | | 0.0486 | 90.0 | 2250 | 1.3069 | 0.64 | 0.4890 | 2.7592 | 0.64 | 0.6422 | 0.1477 | 0.1465 | | 0.0486 | 91.0 | 2275 | 1.3067 | 0.64 | 0.4889 | 2.7590 | 0.64 | 0.6422 | 0.1508 | 0.1465 | | 0.0486 | 92.0 | 2300 | 1.3066 | 0.64 | 0.4888 | 2.7591 | 0.64 | 0.6422 | 0.1484 | 0.1464 | | 0.0486 | 93.0 | 2325 | 1.3068 | 0.64 | 0.4889 | 2.7588 | 0.64 | 0.6422 | 0.1485 | 0.1465 | | 0.0486 | 94.0 | 2350 | 1.3067 | 0.64 | 0.4889 | 2.7590 | 0.64 | 0.6422 | 0.1482 | 0.1465 | | 0.0486 | 95.0 | 2375 | 1.3068 | 0.64 | 0.4889 | 2.7589 | 0.64 | 0.6422 | 0.1482 | 0.1465 | | 0.0486 | 96.0 | 2400 | 1.3067 | 0.64 | 0.4889 | 2.7589 | 0.64 | 0.6422 | 0.1482 | 0.1464 | | 0.0486 | 97.0 | 2425 | 1.3068 | 0.64 | 0.4889 | 2.7590 | 0.64 | 0.6422 | 0.1482 | 0.1465 | | 0.0486 | 
98.0 | 2450 | 1.3067 | 0.64 | 0.4889 | 2.7589 | 0.64 | 0.6422 | 0.1482 | 0.1464 | | 0.0486 | 99.0 | 2475 | 1.3067 | 0.64 | 0.4889 | 2.7589 | 0.64 | 0.6422 | 0.1482 | 0.1465 | | 0.0484 | 100.0 | 2500 | 1.3067 | 0.64 | 0.4889 | 2.7590 | 0.64 | 0.6422 | 0.1482 | 0.1465 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.12.0 - Tokenizers 0.12.1
wesley7137/fal-7B-shard-quantum
wesley7137
2023-07-10T22:53:05Z
0
0
peft
[ "peft", "pytorch", "RefinedWebModel", "custom_code", "region:us" ]
null
2023-07-10T22:04:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0.dev0
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
jordyvl
2023-07-10T22:40:13Z
161
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T22:00:19Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_tobacco3482_kd_CEKD_t5.0_a0.9 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5373 - Accuracy: 0.85 - Brier Loss: 0.2432 - Nll: 1.1157 - F1 Micro: 0.85 - F1 Macro: 0.8450 - Ece: 0.1621 - Aurc: 0.0427 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 7 | 2.1036 | 0.215 | 0.8753 | 5.3195 | 0.2150 | 0.1264 | 0.2571 | 0.6923 | | No log | 2.0 | 14 | 1.6952 | 0.405 | 0.7407 | 3.4929 | 0.405 | 0.2416 | 0.2907 | 0.4040 | | No log | 3.0 | 21 | 1.1843 | 0.62 | 0.5633 | 2.0113 | 0.62 | 0.5725 | 0.2740 | 0.2014 | | No log | 4.0 | 28 | 0.8797 | 0.71 | 0.4080 | 1.7043 | 0.7100 | 0.6683 | 0.2024 | 0.1125 | | No log | 5.0 | 35 | 0.8570 | 0.715 | 0.3837 | 1.6476 | 0.715 | 0.7280 | 0.2189 | 0.1079 | | No log | 6.0 | 42 | 0.7484 | 0.775 | 0.3285 | 1.5962 | 0.775 | 0.7668 | 0.1873 | 0.0816 | | No log | 7.0 | 49 | 0.7337 | 0.79 | 0.3131 | 1.5377 | 0.79 | 0.7779 | 0.1904 | 0.0771 | | No log | 8.0 | 56 | 0.6709 | 0.795 | 0.3012 | 1.2156 | 0.795 | 0.7776 | 0.1939 | 0.0761 | | No log | 9.0 | 63 | 0.6901 | 0.795 | 0.3069 | 1.4725 | 0.795 | 0.7916 | 0.1882 | 0.0769 | | No log | 10.0 | 70 | 0.7960 | 0.75 | 0.3586 | 1.4426 | 0.75 | 0.7406 | 0.1868 | 0.0976 | | No log | 11.0 | 77 | 0.7489 | 0.77 | 0.3296 | 1.6202 | 0.7700 | 0.7794 | 0.2020 | 0.0878 | | No log | 12.0 | 84 | 0.7068 | 0.785 | 0.3270 | 1.4127 | 0.785 | 0.7812 | 0.1922 | 0.0759 | | No log | 13.0 | 91 | 0.6687 | 0.79 | 0.3050 | 1.3820 | 0.79 | 0.7945 | 0.1818 | 0.0625 | | No log | 14.0 | 98 | 0.6052 | 0.79 | 0.2854 | 1.0602 | 0.79 | 0.7716 | 0.1702 | 0.0590 | | No log | 15.0 | 105 | 0.6369 | 0.795 | 0.2959 | 1.0580 | 0.795 | 0.7953 | 0.1709 | 0.0603 | | No log | 16.0 | 112 | 0.6204 | 0.81 | 0.2816 | 1.1886 | 0.81 | 0.8050 | 0.1657 | 0.0702 | | No log | 17.0 | 119 | 0.5648 | 0.83 | 0.2475 | 1.2506 | 0.83 | 0.8241 | 0.1347 | 0.0612 | | No log | 18.0 | 126 | 0.5849 | 0.83 | 0.2672 | 1.2245 | 0.83 | 0.8155 | 0.1646 | 0.0601 | | No log | 19.0 | 133 | 0.5536 | 0.835 | 0.2475 | 1.0514 | 0.835 | 0.8254 | 0.1683 | 0.0531 | | No log | 20.0 | 140 | 0.5689 | 0.835 | 0.2513 | 1.2369 | 0.835 | 0.8437 | 0.1722 | 0.0489 | | No log | 21.0 | 147 | 0.5540 | 0.83 | 0.2485 | 1.2139 | 0.83 | 0.8165 | 0.1641 | 0.0608 | | No log | 22.0 | 154 | 0.5352 | 0.835 | 0.2402 | 1.0108 | 0.835 | 0.8295 | 0.1408 | 0.0430 | | No log | 23.0 | 161 | 0.5380 | 0.84 | 0.2403 | 1.2280 | 
0.8400 | 0.8347 | 0.1405 | 0.0436 | | No log | 24.0 | 168 | 0.5422 | 0.835 | 0.2471 | 1.0204 | 0.835 | 0.8324 | 0.1606 | 0.0445 | | No log | 25.0 | 175 | 0.5342 | 0.85 | 0.2404 | 1.0767 | 0.85 | 0.8487 | 0.1469 | 0.0432 | | No log | 26.0 | 182 | 0.5374 | 0.84 | 0.2429 | 1.0774 | 0.8400 | 0.8334 | 0.1420 | 0.0462 | | No log | 27.0 | 189 | 0.5311 | 0.85 | 0.2395 | 1.0748 | 0.85 | 0.8487 | 0.1439 | 0.0446 | | No log | 28.0 | 196 | 0.5298 | 0.85 | 0.2384 | 1.1337 | 0.85 | 0.8487 | 0.1570 | 0.0437 | | No log | 29.0 | 203 | 0.5387 | 0.845 | 0.2435 | 1.1319 | 0.845 | 0.8424 | 0.1539 | 0.0458 | | No log | 30.0 | 210 | 0.5361 | 0.85 | 0.2430 | 1.0648 | 0.85 | 0.8450 | 0.1679 | 0.0431 | | No log | 31.0 | 217 | 0.5339 | 0.85 | 0.2413 | 1.0676 | 0.85 | 0.8487 | 0.1646 | 0.0428 | | No log | 32.0 | 224 | 0.5345 | 0.85 | 0.2421 | 1.0709 | 0.85 | 0.8487 | 0.1476 | 0.0440 | | No log | 33.0 | 231 | 0.5343 | 0.85 | 0.2421 | 1.1236 | 0.85 | 0.8450 | 0.1621 | 0.0431 | | No log | 34.0 | 238 | 0.5353 | 0.845 | 0.2426 | 1.1244 | 0.845 | 0.8424 | 0.1710 | 0.0428 | | No log | 35.0 | 245 | 0.5346 | 0.85 | 0.2423 | 1.0649 | 0.85 | 0.8487 | 0.1520 | 0.0440 | | No log | 36.0 | 252 | 0.5356 | 0.855 | 0.2422 | 1.1241 | 0.855 | 0.8517 | 0.1814 | 0.0429 | | No log | 37.0 | 259 | 0.5357 | 0.85 | 0.2426 | 1.1237 | 0.85 | 0.8450 | 0.1670 | 0.0425 | | No log | 38.0 | 266 | 0.5356 | 0.845 | 0.2426 | 1.1226 | 0.845 | 0.8419 | 0.1607 | 0.0435 | | No log | 39.0 | 273 | 0.5347 | 0.855 | 0.2420 | 1.0739 | 0.855 | 0.8517 | 0.1597 | 0.0427 | | No log | 40.0 | 280 | 0.5356 | 0.855 | 0.2423 | 1.1203 | 0.855 | 0.8517 | 0.1676 | 0.0435 | | No log | 41.0 | 287 | 0.5365 | 0.85 | 0.2431 | 1.1199 | 0.85 | 0.8450 | 0.1780 | 0.0429 | | No log | 42.0 | 294 | 0.5356 | 0.85 | 0.2426 | 1.1173 | 0.85 | 0.8450 | 0.1653 | 0.0430 | | No log | 43.0 | 301 | 0.5363 | 0.85 | 0.2428 | 1.1189 | 0.85 | 0.8450 | 0.1550 | 0.0435 | | No log | 44.0 | 308 | 0.5345 | 0.85 | 0.2418 | 1.1193 | 0.85 | 0.8450 | 0.1590 | 0.0428 | | No log | 45.0 | 315 | 0.5374 | 0.85 | 0.2435 | 1.1202 | 0.85 | 0.8450 | 0.1633 | 0.0435 | | No log | 46.0 | 322 | 0.5355 | 0.85 | 0.2423 | 1.1183 | 0.85 | 0.8450 | 0.1564 | 0.0428 | | No log | 47.0 | 329 | 0.5354 | 0.85 | 0.2425 | 1.1176 | 0.85 | 0.8450 | 0.1509 | 0.0429 | | No log | 48.0 | 336 | 0.5369 | 0.85 | 0.2433 | 1.1177 | 0.85 | 0.8450 | 0.1517 | 0.0432 | | No log | 49.0 | 343 | 0.5361 | 0.85 | 0.2428 | 1.1182 | 0.85 | 0.8450 | 0.1490 | 0.0428 | | No log | 50.0 | 350 | 0.5364 | 0.85 | 0.2431 | 1.1179 | 0.85 | 0.8450 | 0.1654 | 0.0430 | | No log | 51.0 | 357 | 0.5365 | 0.85 | 0.2428 | 1.1185 | 0.85 | 0.8450 | 0.1729 | 0.0432 | | No log | 52.0 | 364 | 0.5364 | 0.85 | 0.2430 | 1.1165 | 0.85 | 0.8450 | 0.1614 | 0.0429 | | No log | 53.0 | 371 | 0.5362 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1694 | 0.0430 | | No log | 54.0 | 378 | 0.5369 | 0.85 | 0.2432 | 1.1170 | 0.85 | 0.8450 | 0.1597 | 0.0432 | | No log | 55.0 | 385 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1670 | 0.0429 | | No log | 56.0 | 392 | 0.5367 | 0.85 | 0.2430 | 1.1180 | 0.85 | 0.8450 | 0.1619 | 0.0430 | | No log | 57.0 | 399 | 0.5364 | 0.85 | 0.2429 | 1.1163 | 0.85 | 0.8450 | 0.1649 | 0.0429 | | No log | 58.0 | 406 | 0.5364 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1611 | 0.0429 | | No log | 59.0 | 413 | 0.5365 | 0.85 | 0.2428 | 1.1163 | 0.85 | 0.8450 | 0.1591 | 0.0429 | | No log | 60.0 | 420 | 0.5364 | 0.85 | 0.2429 | 1.1155 | 0.85 | 0.8450 | 0.1588 | 0.0429 | | No log | 61.0 | 427 | 0.5370 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1772 | 0.0432 | | No 
log | 62.0 | 434 | 0.5367 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1622 | 0.0429 | | No log | 63.0 | 441 | 0.5362 | 0.85 | 0.2428 | 1.1162 | 0.85 | 0.8450 | 0.1503 | 0.0428 | | No log | 64.0 | 448 | 0.5372 | 0.85 | 0.2433 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0432 | | No log | 65.0 | 455 | 0.5371 | 0.85 | 0.2431 | 1.1162 | 0.85 | 0.8450 | 0.1499 | 0.0429 | | No log | 66.0 | 462 | 0.5367 | 0.85 | 0.2430 | 1.1160 | 0.85 | 0.8450 | 0.1591 | 0.0427 | | No log | 67.0 | 469 | 0.5367 | 0.85 | 0.2430 | 1.1164 | 0.85 | 0.8450 | 0.1562 | 0.0428 | | No log | 68.0 | 476 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1556 | 0.0427 | | No log | 69.0 | 483 | 0.5368 | 0.85 | 0.2431 | 1.1158 | 0.85 | 0.8450 | 0.1593 | 0.0428 | | No log | 70.0 | 490 | 0.5372 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1628 | 0.0428 | | No log | 71.0 | 497 | 0.5371 | 0.85 | 0.2432 | 1.1163 | 0.85 | 0.8450 | 0.1599 | 0.0429 | | 0.1708 | 72.0 | 504 | 0.5370 | 0.85 | 0.2430 | 1.1161 | 0.85 | 0.8450 | 0.1559 | 0.0430 | | 0.1708 | 73.0 | 511 | 0.5372 | 0.85 | 0.2433 | 1.1154 | 0.85 | 0.8450 | 0.1556 | 0.0428 | | 0.1708 | 74.0 | 518 | 0.5370 | 0.85 | 0.2429 | 1.1165 | 0.85 | 0.8450 | 0.1540 | 0.0428 | | 0.1708 | 75.0 | 525 | 0.5371 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0427 | | 0.1708 | 76.0 | 532 | 0.5369 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1619 | 0.0427 | | 0.1708 | 77.0 | 539 | 0.5369 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1623 | 0.0429 | | 0.1708 | 78.0 | 546 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1619 | 0.0427 | | 0.1708 | 79.0 | 553 | 0.5375 | 0.85 | 0.2433 | 1.1162 | 0.85 | 0.8450 | 0.1688 | 0.0429 | | 0.1708 | 80.0 | 560 | 0.5372 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1623 | 0.0429 | | 0.1708 | 81.0 | 567 | 0.5373 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1620 | 0.0428 | | 0.1708 | 82.0 | 574 | 0.5374 | 0.85 | 0.2433 | 1.1160 | 0.85 | 0.8450 | 0.1622 | 0.0428 | | 0.1708 | 83.0 | 581 | 0.5372 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1622 | 0.0428 | | 0.1708 | 84.0 | 588 | 0.5371 | 0.85 | 0.2431 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 | | 0.1708 | 85.0 | 595 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1687 | 0.0426 | | 0.1708 | 86.0 | 602 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1619 | 0.0426 | | 0.1708 | 87.0 | 609 | 0.5374 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1687 | 0.0428 | | 0.1708 | 88.0 | 616 | 0.5373 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 89.0 | 623 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 90.0 | 630 | 0.5373 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 91.0 | 637 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 92.0 | 644 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 93.0 | 651 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 94.0 | 658 | 0.5373 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 95.0 | 665 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 | | 0.1708 | 96.0 | 672 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 | | 0.1708 | 97.0 | 679 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 | | 0.1708 | 98.0 | 686 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 | | 0.1708 | 99.0 | 693 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 | | 0.1708 | 100.0 | 700 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 
| 0.8450 | 0.1621 | 0.0427 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
Raizel123/SNoonzlora
Raizel123
2023-07-10T22:35:30Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T22:32:26Z
--- license: creativeml-openrail-m ---
maidacundo/falcon_qlora_sql_r2
maidacundo
2023-07-10T22:30:14Z
0
0
null
[ "generated_from_trainer", "dataset:spider", "base_model:tiiuae/falcon-7b", "base_model:finetune:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
2023-07-10T09:40:03Z
--- license: apache-2.0 base_model: tiiuae/falcon-7b tags: - generated_from_trainer datasets: - spider model-index: - name: falcon_qlora_sql_r2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon_qlora_sql_r2 This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the spider dataset. It achieves the following results on the evaluation set: - Loss: 0.1735 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 43.7 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2993 | 0.23 | 100 | 0.2863 | | 0.8003 | 0.46 | 200 | 0.3358 | | 0.1872 | 0.68 | 300 | 0.2424 | | 0.1267 | 0.91 | 400 | 0.2362 | | 0.2214 | 1.14 | 500 | 0.2564 | | 0.2885 | 1.37 | 600 | 0.2187 | | 0.1654 | 1.6 | 700 | 0.1988 | | 0.1633 | 1.83 | 800 | 0.2062 | | 0.0381 | 2.05 | 900 | 0.1868 | | 0.0633 | 2.28 | 1000 | 0.1767 | | 0.163 | 2.51 | 1100 | 0.1861 | | 0.1718 | 2.74 | 1200 | 0.1875 | | 0.1743 | 2.97 | 1300 | 0.1854 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
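The card above describes a QLoRA-style fine-tune of Falcon-7B on the Spider text-to-SQL dataset. A minimal sketch of how such a setup is typically assembled follows; the 4-bit settings, LoRA rank/alpha, and target modules are illustrative assumptions, since the card does not publish them.

```python
# Hedged sketch (not the authors' exact setup): load Falcon-7B in 4-bit and
# attach a LoRA adapter in the QLoRA style. LoRA hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "tiiuae/falcon-7b"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
model = prepare_model_for_kbit_training(model)

# Hypothetical LoRA config; the card does not state rank, alpha or target modules.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```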
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_MSE
jordyvl
2023-07-10T22:30:03Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T21:13:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_rvl_cdip_100_examples_per_class_kd_MSE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_rvl_cdip_100_examples_per_class_kd_MSE This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4673 - Accuracy: 0.6425 - Brier Loss: 0.4763 - Nll: 3.0680 - F1 Micro: 0.6425 - F1 Macro: 0.6485 - Ece: 0.1946 - Aurc: 0.1381 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 25 | 4.4851 | 0.06 | 0.9565 | 13.8276 | 0.06 | 0.0556 | 0.1688 | 0.9385 | | No log | 2.0 | 50 | 3.5619 | 0.3775 | 0.7827 | 6.2649 | 0.3775 | 0.3611 | 0.2331 | 0.3882 | | No log | 3.0 | 75 | 2.8990 | 0.5025 | 0.6453 | 4.7376 | 0.5025 | 0.4858 | 0.1689 | 0.2658 | | No log | 4.0 | 100 | 2.5972 | 0.515 | 0.5980 | 4.4210 | 0.515 | 0.4895 | 0.1605 | 0.2249 | | No log | 5.0 | 125 | 2.4353 | 0.56 | 0.5762 | 3.4885 | 0.56 | 0.5566 | 0.1548 | 0.2100 | | No log | 6.0 | 150 | 2.4157 | 0.5475 | 0.5864 | 3.8261 | 0.5475 | 0.5323 | 0.1837 | 0.2167 | | No log | 7.0 | 175 | 2.1786 | 0.6075 | 0.5203 | 3.4565 | 0.6075 | 0.6103 | 0.1403 | 0.1670 | | No log | 8.0 | 200 | 2.1082 | 0.63 | 0.5040 | 3.3570 | 0.63 | 0.6246 | 0.1580 | 0.1530 | | No log | 9.0 | 225 | 2.0472 | 0.625 | 0.5042 | 3.8572 | 0.625 | 0.6184 | 0.1552 | 0.1530 | | No log | 10.0 | 250 | 2.0589 | 0.6025 | 0.5468 | 3.5723 | 0.6025 | 0.5982 | 0.1781 | 0.1785 | | No log | 11.0 | 275 | 1.8965 | 0.65 | 0.4755 | 3.4466 | 0.65 | 0.6497 | 0.1605 | 0.1475 | | No log | 12.0 | 300 | 1.9014 | 0.6325 | 0.5066 | 3.0881 | 0.6325 | 0.6359 | 0.1658 | 0.1591 | | No log | 13.0 | 325 | 1.7904 | 0.6175 | 0.5162 | 3.4673 | 0.6175 | 0.6141 | 0.1525 | 0.1598 | | No log | 14.0 | 350 | 1.8624 | 0.625 | 0.5173 | 3.6824 | 0.625 | 0.6179 | 0.1567 | 0.1624 | | No log | 15.0 | 375 | 1.7083 | 0.6625 | 0.4817 | 3.1296 | 0.6625 | 0.6686 | 0.1651 | 0.1405 | | No log | 16.0 | 400 | 1.8848 | 0.59 | 0.5478 | 4.3761 | 0.59 | 0.5913 | 0.2083 | 0.1696 | | No log | 17.0 | 425 | 1.7238 | 0.6125 | 0.5229 | 3.1232 | 0.6125 | 0.6052 | 0.1833 | 0.1553 | | No log | 18.0 | 450 | 1.7126 | 0.625 | 0.5152 | 2.9267 | 0.625 | 0.6284 | 0.1747 | 0.1565 | | No log | 19.0 | 475 | 1.6459 | 0.6275 | 0.5024 | 2.9078 | 0.6275 | 0.6219 | 0.1766 | 0.1527 | | 1.0542 | 20.0 | 500 | 1.6029 | 0.6275 | 0.4855 | 3.0931 | 0.6275 | 0.6316 | 0.1720 | 0.1414 | | 1.0542 | 21.0 | 525 | 1.6566 | 0.6525 | 0.4847 | 3.0998 | 0.6525 | 0.6479 | 0.1558 | 0.1438 | | 1.0542 | 22.0 | 550 | 1.6169 | 0.645 | 0.4894 | 3.0081 | 0.645 | 0.6471 | 0.1687 | 0.1400 
| | 1.0542 | 23.0 | 575 | 1.5322 | 0.6525 | 0.4557 | 3.3587 | 0.6525 | 0.6520 | 0.1428 | 0.1247 | | 1.0542 | 24.0 | 600 | 1.5991 | 0.6475 | 0.4787 | 2.9349 | 0.6475 | 0.6444 | 0.1580 | 0.1450 | | 1.0542 | 25.0 | 625 | 1.5625 | 0.6375 | 0.4926 | 3.0245 | 0.6375 | 0.6378 | 0.1641 | 0.1433 | | 1.0542 | 26.0 | 650 | 1.5366 | 0.64 | 0.4884 | 3.3388 | 0.64 | 0.6461 | 0.1595 | 0.1453 | | 1.0542 | 27.0 | 675 | 1.5686 | 0.65 | 0.4765 | 3.5120 | 0.65 | 0.6504 | 0.1625 | 0.1359 | | 1.0542 | 28.0 | 700 | 1.5562 | 0.6475 | 0.4817 | 3.0348 | 0.6475 | 0.6488 | 0.1459 | 0.1388 | | 1.0542 | 29.0 | 725 | 1.5213 | 0.6475 | 0.4719 | 3.2628 | 0.6475 | 0.6475 | 0.1634 | 0.1326 | | 1.0542 | 30.0 | 750 | 1.5492 | 0.6675 | 0.4730 | 3.1693 | 0.6675 | 0.6679 | 0.1469 | 0.1415 | | 1.0542 | 31.0 | 775 | 1.5311 | 0.65 | 0.4896 | 3.0881 | 0.65 | 0.6504 | 0.1815 | 0.1380 | | 1.0542 | 32.0 | 800 | 1.5556 | 0.6475 | 0.4821 | 3.1829 | 0.6475 | 0.6491 | 0.1640 | 0.1405 | | 1.0542 | 33.0 | 825 | 1.5471 | 0.6375 | 0.4846 | 3.4190 | 0.6375 | 0.6407 | 0.1628 | 0.1415 | | 1.0542 | 34.0 | 850 | 1.4809 | 0.6575 | 0.4714 | 2.9136 | 0.6575 | 0.6612 | 0.1729 | 0.1338 | | 1.0542 | 35.0 | 875 | 1.5256 | 0.66 | 0.4773 | 3.2303 | 0.66 | 0.6650 | 0.1746 | 0.1368 | | 1.0542 | 36.0 | 900 | 1.4929 | 0.6675 | 0.4671 | 3.2360 | 0.6675 | 0.6698 | 0.1698 | 0.1309 | | 1.0542 | 37.0 | 925 | 1.4923 | 0.645 | 0.4880 | 3.0567 | 0.645 | 0.6564 | 0.1764 | 0.1395 | | 1.0542 | 38.0 | 950 | 1.5038 | 0.665 | 0.4672 | 3.2116 | 0.665 | 0.6661 | 0.1588 | 0.1343 | | 1.0542 | 39.0 | 975 | 1.4708 | 0.6625 | 0.4669 | 3.1420 | 0.6625 | 0.6675 | 0.1683 | 0.1301 | | 0.0522 | 40.0 | 1000 | 1.5153 | 0.6475 | 0.4865 | 3.1796 | 0.6475 | 0.6447 | 0.1639 | 0.1400 | | 0.0522 | 41.0 | 1025 | 1.4705 | 0.6575 | 0.4642 | 3.2196 | 0.6575 | 0.6626 | 0.1440 | 0.1308 | | 0.0522 | 42.0 | 1050 | 1.4844 | 0.6575 | 0.4722 | 3.2445 | 0.6575 | 0.6595 | 0.1746 | 0.1328 | | 0.0522 | 43.0 | 1075 | 1.4957 | 0.6425 | 0.4828 | 3.1456 | 0.6425 | 0.6468 | 0.1499 | 0.1417 | | 0.0522 | 44.0 | 1100 | 1.5179 | 0.645 | 0.4910 | 3.3921 | 0.645 | 0.6470 | 0.1861 | 0.1433 | | 0.0522 | 45.0 | 1125 | 1.4878 | 0.6425 | 0.4839 | 3.2139 | 0.6425 | 0.6478 | 0.1720 | 0.1403 | | 0.0522 | 46.0 | 1150 | 1.4666 | 0.655 | 0.4741 | 2.9333 | 0.655 | 0.6601 | 0.1813 | 0.1347 | | 0.0522 | 47.0 | 1175 | 1.4954 | 0.6575 | 0.4776 | 3.2102 | 0.6575 | 0.6604 | 0.1842 | 0.1390 | | 0.0522 | 48.0 | 1200 | 1.4976 | 0.645 | 0.4856 | 3.1539 | 0.645 | 0.6493 | 0.1549 | 0.1407 | | 0.0522 | 49.0 | 1225 | 1.4772 | 0.64 | 0.4780 | 2.9845 | 0.64 | 0.6445 | 0.1826 | 0.1388 | | 0.0522 | 50.0 | 1250 | 1.4584 | 0.65 | 0.4703 | 3.0776 | 0.65 | 0.6533 | 0.1685 | 0.1352 | | 0.0522 | 51.0 | 1275 | 1.4828 | 0.6325 | 0.4844 | 3.1425 | 0.6325 | 0.6377 | 0.1641 | 0.1409 | | 0.0522 | 52.0 | 1300 | 1.4676 | 0.6525 | 0.4737 | 3.1483 | 0.6525 | 0.6565 | 0.1773 | 0.1358 | | 0.0522 | 53.0 | 1325 | 1.4675 | 0.6475 | 0.4791 | 3.1411 | 0.6475 | 0.6515 | 0.1820 | 0.1388 | | 0.0522 | 54.0 | 1350 | 1.4724 | 0.645 | 0.4764 | 3.0744 | 0.645 | 0.6499 | 0.1847 | 0.1382 | | 0.0522 | 55.0 | 1375 | 1.4689 | 0.6425 | 0.4769 | 3.2256 | 0.6425 | 0.6476 | 0.1839 | 0.1376 | | 0.0522 | 56.0 | 1400 | 1.4660 | 0.6425 | 0.4760 | 2.9907 | 0.6425 | 0.6479 | 0.1906 | 0.1378 | | 0.0522 | 57.0 | 1425 | 1.4663 | 0.645 | 0.4757 | 3.0722 | 0.645 | 0.6514 | 0.1705 | 0.1367 | | 0.0522 | 58.0 | 1450 | 1.4678 | 0.65 | 0.4770 | 3.0710 | 0.65 | 0.6546 | 0.1794 | 0.1371 | | 0.0522 | 59.0 | 1475 | 1.4717 | 0.64 | 0.4786 | 3.0737 | 0.64 | 0.6455 | 0.1889 | 0.1392 | | 0.0064 | 60.0 | 1500 | 
1.4691 | 0.645 | 0.4768 | 3.0688 | 0.645 | 0.6499 | 0.1815 | 0.1378 | | 0.0064 | 61.0 | 1525 | 1.4689 | 0.64 | 0.4767 | 3.0688 | 0.64 | 0.6452 | 0.1846 | 0.1382 | | 0.0064 | 62.0 | 1550 | 1.4689 | 0.64 | 0.4770 | 3.0674 | 0.64 | 0.6455 | 0.1937 | 0.1383 | | 0.0064 | 63.0 | 1575 | 1.4687 | 0.6425 | 0.4767 | 3.0700 | 0.6425 | 0.6485 | 0.1897 | 0.1381 | | 0.0064 | 64.0 | 1600 | 1.4674 | 0.6425 | 0.4764 | 3.0675 | 0.6425 | 0.6472 | 0.1855 | 0.1375 | | 0.0064 | 65.0 | 1625 | 1.4681 | 0.6425 | 0.4766 | 3.0694 | 0.6425 | 0.6485 | 0.1917 | 0.1381 | | 0.0064 | 66.0 | 1650 | 1.4681 | 0.6425 | 0.4766 | 3.0687 | 0.6425 | 0.6472 | 0.1905 | 0.1378 | | 0.0064 | 67.0 | 1675 | 1.4667 | 0.645 | 0.4757 | 3.0681 | 0.645 | 0.6505 | 0.1899 | 0.1375 | | 0.0064 | 68.0 | 1700 | 1.4683 | 0.6425 | 0.4771 | 3.0686 | 0.6425 | 0.6474 | 0.1871 | 0.1379 | | 0.0064 | 69.0 | 1725 | 1.4672 | 0.64 | 0.4760 | 3.0679 | 0.64 | 0.6455 | 0.1932 | 0.1380 | | 0.0064 | 70.0 | 1750 | 1.4673 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1955 | 0.1376 | | 0.0064 | 71.0 | 1775 | 1.4676 | 0.645 | 0.4763 | 3.0680 | 0.645 | 0.6505 | 0.1921 | 0.1376 | | 0.0064 | 72.0 | 1800 | 1.4674 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1946 | 0.1376 | | 0.0064 | 73.0 | 1825 | 1.4675 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6474 | 0.1946 | 0.1377 | | 0.0064 | 74.0 | 1850 | 1.4674 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6485 | 0.1945 | 0.1380 | | 0.0064 | 75.0 | 1875 | 1.4674 | 0.64 | 0.4763 | 3.0680 | 0.64 | 0.6455 | 0.1960 | 0.1380 | | 0.0064 | 76.0 | 1900 | 1.4675 | 0.64 | 0.4764 | 3.0682 | 0.64 | 0.6455 | 0.1972 | 0.1381 | | 0.0064 | 77.0 | 1925 | 1.4675 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1947 | 0.1380 | | 0.0064 | 78.0 | 1950 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 | | 0.0064 | 79.0 | 1975 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6474 | 0.1935 | 0.1376 | | 0.0 | 80.0 | 2000 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1380 | | 0.0 | 81.0 | 2025 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1946 | 0.1380 | | 0.0 | 82.0 | 2050 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1935 | 0.1380 | | 0.0 | 83.0 | 2075 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 84.0 | 2100 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 | | 0.0 | 85.0 | 2125 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 86.0 | 2150 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 87.0 | 2175 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 | | 0.0 | 88.0 | 2200 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 89.0 | 2225 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 90.0 | 2250 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 91.0 | 2275 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 92.0 | 2300 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 93.0 | 2325 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 94.0 | 2350 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1909 | 0.1381 | | 0.0 | 95.0 | 2375 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 96.0 | 2400 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 97.0 | 2425 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 
0.6485 | 0.1946 | 0.1381 | | 0.0 | 98.0 | 2450 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 99.0 | 2475 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | | 0.0 | 100.0 | 2500 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.12.0 - Tokenizers 0.12.1
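The model above was distilled with an MSE objective (the `kd_MSE` suffix in its name). A rough sketch of an MSE-based logit-distillation loss follows; the teacher model, the weighting between the hard-label and distillation terms, and the `alpha` value are assumptions, not details taken from the card.

```python
# Sketch of MSE-based logit distillation suggested by the "kd_MSE" naming above.
import torch
import torch.nn.functional as F

def kd_mse_loss(student_logits, teacher_logits, labels, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)       # hard-label term
    mse = F.mse_loss(student_logits, teacher_logits)   # match teacher logits
    return alpha * ce + (1.0 - alpha) * mse

# Example shapes: batch of 8 examples, 16 document classes (RVL-CDIP subset).
student_logits = torch.randn(8, 16, requires_grad=True)
teacher_logits = torch.randn(8, 16)
labels = torch.randint(0, 16, (8,))
loss = kd_mse_loss(student_logits, teacher_logits, labels)
loss.backward()
```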
Raizel123/Novalora
Raizel123
2023-07-10T22:26:20Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T22:23:44Z
--- license: creativeml-openrail-m ---
Renan1986-12/distilbert-base-uncased-finetuned-cola
Renan1986-12
2023-07-10T22:24:22Z
62
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T22:13:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Renan1986-12/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Renan1986-12/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1680 - Validation Loss: 0.5433 - Train Matthews Correlation: 0.5570 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.4525 | 0.4571 | 0.4871 | 0 | | 0.2752 | 0.4495 | 0.5614 | 1 | | 0.1680 | 0.5433 | 0.5570 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
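The hyperparameters above describe an Adam optimizer driven by a `PolynomialDecay` schedule from 2e-5 to 0 over 1602 steps. A hedged sketch of reproducing that setup in TensorFlow/Keras follows; the dataset pipeline is omitted and `num_labels=2` is an assumption based on the CoLA task.

```python
# Sketch of the optimizer setup implied by the hyperparameters above.
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5, decay_steps=1602, end_learning_rate=0.0, power=1.0
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule, epsilon=1e-8)

model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
model.compile(optimizer=optimizer)  # no loss passed: the model's internal loss is used
```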
severinsimmler/xlm-roberta-longformer-base-16384
severinsimmler
2023-07-10T22:15:52Z
2,437
25
transformers
[ "transformers", "pytorch", "safetensors", "longformer", "feature-extraction", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2004.05150", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2023-04-20T15:41:44Z
--- model-index: - name: xlm-roberta-longformer-base-16384 results: [] license: mit language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh --- # xlm-roberta-longformer-base-16384 ⚠️ This is just the PyTorch version of [`hyperonym/xlm-roberta-longformer-base-16384`](https://huggingface.co/hyperonym/xlm-roberta-longformer-base-16384) without any modifications. **xlm-roberta-longformer** is a multilingual [Longformer](https://arxiv.org/abs/2004.05150) initialized with [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base)'s weights without further pretraining. It is intended to be fine-tuned on a downstream task. The notebook for replicating the model is available on GitHub: https://github.com/hyperonym/dirge/blob/master/models/xlm-roberta-longformer/convert.ipynb
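Since the card says the model is intended to be fine-tuned or used as a long-context encoder, a minimal feature-extraction sketch follows. Giving global attention to the first token is a common Longformer convention and is an assumption here, not something the card prescribes.

```python
# Minimal usage sketch for the long-context encoder above.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "severinsimmler/xlm-roberta-longformer-base-16384"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "A very long multilingual document ... " * 500
# Sequences up to 16384 tokens are supported; 4096 keeps this sketch lightweight.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the <s> token global attention

with torch.no_grad():
    outputs = model(**inputs, global_attention_mask=global_attention_mask)
embeddings = outputs.last_hidden_state  # (1, seq_len, hidden_size)
```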
NasimB/gpt2-dp-mod-datasets-txt-processing-rarity-all
NasimB
2023-07-10T22:14:46Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T19:52:59Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-dp-mod-datasets-txt-processing-rarity-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-dp-mod-datasets-txt-processing-rarity-all This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.4242 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7606 | 0.29 | 500 | 5.6933 | | 5.4375 | 0.59 | 1000 | 5.2559 | | 5.0937 | 0.88 | 1500 | 5.0171 | | 4.8204 | 1.18 | 2000 | 4.8701 | | 4.6728 | 1.47 | 2500 | 4.7593 | | 4.574 | 1.77 | 3000 | 4.6587 | | 4.4456 | 2.06 | 3500 | 4.5885 | | 4.258 | 2.36 | 4000 | 4.5468 | | 4.2423 | 2.65 | 4500 | 4.4860 | | 4.2036 | 2.94 | 5000 | 4.4302 | | 3.9737 | 3.24 | 5500 | 4.4364 | | 3.9439 | 3.53 | 6000 | 4.4019 | | 3.9271 | 3.83 | 6500 | 4.3632 | | 3.7901 | 4.12 | 7000 | 4.3689 | | 3.6474 | 4.42 | 7500 | 4.3662 | | 3.6414 | 4.71 | 8000 | 4.3472 | | 3.6338 | 5.01 | 8500 | 4.3344 | | 3.3764 | 5.3 | 9000 | 4.3618 | | 3.3821 | 5.59 | 9500 | 4.3568 | | 3.3777 | 5.89 | 10000 | 4.3513 | | 3.2752 | 6.18 | 10500 | 4.3602 | | 3.2228 | 6.48 | 11000 | 4.3652 | | 3.2172 | 6.77 | 11500 | 4.3656 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
MnLgt/swivel_inversion
MnLgt
2023-07-10T22:11:42Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-07-10T22:11:41Z
--- license: mit --- ### swivel_inversion on Stable Diffusion This is the `<swivel-chair>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<swivel-chair> 0](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/942493f105b42e65e9bbb2afb8fd24ee.jpg) ![<swivel-chair> 1](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/25a60f0b7e1df4480da0096f4855d3cd.jpg) ![<swivel-chair> 2](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/099ba1cdff4a7d6d76437ec3b9d48743.jpg) ![<swivel-chair> 3](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/a95a8487c048359027c5dc1f2f4231cd.jpg) ![<swivel-chair> 4](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/e2258228f0c125fc4f0d2b3c27c4b5b5.jpg) ![<swivel-chair> 5](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/7692300d5457b1ad0b9b77bb4370a7b5.jpg) ![<swivel-chair> 6](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/3b7fb905cd512af41d664db5b5c9c489.jpg) ![<swivel-chair> 7](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/89803df4905f81d2c1f70a1a7faf68fd.jpg) ![<swivel-chair> 8](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/570e6f83c4b0cd052893aee8e7030521.jpg) ![<swivel-chair> 9](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/465e22dc7bbfd6f42a803e8ab35c0609.jpg) ![<swivel-chair> 10](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/a6490c193d2815bd520a2478fcdb543f.jpg) ![<swivel-chair> 11](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/c789ecc814b35df75187611633dbd84a.jpg) ![<swivel-chair> 12](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/11ca822a037aa86f3316564ac212ac1c.jpg) ![<swivel-chair> 13](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/5dea3e2b3148e21a4cb4dfe8dea7af08.jpg) ![<swivel-chair> 14](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/46b89058d8c2342f3c152b50039cb0c9.jpg) ![<swivel-chair> 15](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/749bf590f4dd9f4c3ef1ffd58e7db3e8.jpg) ![<swivel-chair> 16](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/6e32521663ac1cd6d0999e4a09dbf5a1.jpg) ![<swivel-chair> 17](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/bd3580a999d8ec073f2e9e7584fb1479.jpg) ![<swivel-chair> 18](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/fa116ff22175ba831f641af9bc1b44c8.jpg) ![<swivel-chair> 19](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/f32510be7c6e3d2d540d53ef0c0b5536.jpg) ![<swivel-chair> 20](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/3aab159b96dbcc1d403eeeea81191fb2.jpg) ![<swivel-chair> 
21](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/6405e9bae158bf99eab384a36468c0cc.jpg) ![<swivel-chair> 22](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/e727d064c7c19b510acaacb2637c195e.jpg) ![<swivel-chair> 23](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/ebf0269fca456ea8e2e307f6de6222ef.jpg) ![<swivel-chair> 24](https://huggingface.co/jordandavis/swivel_inversion/resolve/main/concept_images/ac4a58c646a756d07608c485bbe7fa45.jpg)
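A hedged sketch of loading the learned `<swivel-chair>` token with `diffusers` follows, as an alternative to the linked notebooks. The base checkpoint and the presence of a standard `learned_embeds.bin` file in the repository are assumptions.

```python
# Illustrative sketch of using the learned <swivel-chair> embedding with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load_textual_inversion accepts a Hub repo id containing the learned embedding.
pipe.load_textual_inversion("MnLgt/swivel_inversion", token="<swivel-chair>")

image = pipe("a photo of a <swivel-chair> in a sunlit office",
             num_inference_steps=30).images[0]
image.save("swivel_chair.png")
```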
TheBloke/airochronos-33B-GGML
TheBloke
2023-07-10T22:07:18Z
0
18
null
[ "license:other", "region:us" ]
null
2023-07-10T21:14:18Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Henk717's Airochronos 33B GGML These files are GGML format model files for [Henk717's Airochronos 33B](https://huggingface.co/Henk717/airochronos-33B). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend. * [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend. * [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airochronos-33B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airochronos-33B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/airochronos-33B) ## Prompt template: Alpaca ```Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation. ## Explanation of the new k-quant methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. 
This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | airochronos-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | airochronos-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | airochronos-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | airochronos-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | airochronos-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | airochronos-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | airochronos-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. | | airochronos-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | airochronos-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | airochronos-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | airochronos-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | airochronos-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | airochronos-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | airochronos-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. 
| **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m airochronos-33b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:" ``` Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Henk717's Airochronos 33B After the initial experiment with chronoboros-33B it was evident that the merge was to unpredictable to be useful, testing the individual models it became clear that the bias should be weighted towards Chronos. This is the new release of the merge with 75% chronos 33B, and 25% airoboros-1.4 33B. 
The model has been tested with the Alpaca prompting format combined with KoboldAI Lite's instruct and chat modes, as well as regular story writing. It has also been tested on basic reasoning tasks, but has not seen much testing for factual accuracy.
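The card above gives a `llama.cpp` command line; an equivalent call can be made from Python with `llama-cpp-python`, one of the libraries the card lists. The quantisation file name and sampling settings below are illustrative assumptions.

```python
# Python counterpart of the CLI invocation shown above, using llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./airochronos-33b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_threads=10,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: Write a story about llamas\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```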
nolanaatama/ptrgrffnfmlygyrvcv2300pchrhys
nolanaatama
2023-07-10T22:06:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T22:03:56Z
--- license: creativeml-openrail-m ---
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
jordyvl
2023-07-10T21:59:33Z
161
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T21:19:56Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_tobacco3482_kd_CEKD_t5.0_a0.7 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4918 - Accuracy: 0.85 - Brier Loss: 0.2583 - Nll: 1.0894 - F1 Micro: 0.85 - F1 Macro: 0.8374 - Ece: 0.1917 - Aurc: 0.0470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 7 | 1.8329 | 0.225 | 0.8761 | 5.2731 | 0.225 | 0.1384 | 0.2607 | 0.6977 | | No log | 2.0 | 14 | 1.4785 | 0.405 | 0.7460 | 3.4067 | 0.405 | 0.2289 | 0.3097 | 0.4085 | | No log | 3.0 | 21 | 1.0406 | 0.6 | 0.5725 | 1.8722 | 0.6 | 0.5345 | 0.3050 | 0.2010 | | No log | 4.0 | 28 | 0.8087 | 0.725 | 0.4192 | 1.6096 | 0.7250 | 0.6767 | 0.2345 | 0.1149 | | No log | 5.0 | 35 | 0.7666 | 0.735 | 0.3731 | 1.6189 | 0.735 | 0.7350 | 0.2377 | 0.1011 | | No log | 6.0 | 42 | 0.6960 | 0.78 | 0.3413 | 1.5230 | 0.78 | 0.7592 | 0.2295 | 0.0868 | | No log | 7.0 | 49 | 0.6490 | 0.805 | 0.3110 | 1.4861 | 0.805 | 0.7864 | 0.2138 | 0.0785 | | No log | 8.0 | 56 | 0.6238 | 0.795 | 0.3069 | 1.2098 | 0.795 | 0.7816 | 0.2065 | 0.0698 | | No log | 9.0 | 63 | 0.5755 | 0.83 | 0.2866 | 1.1943 | 0.83 | 0.8117 | 0.1937 | 0.0694 | | No log | 10.0 | 70 | 0.6360 | 0.77 | 0.3164 | 1.2608 | 0.7700 | 0.7550 | 0.1785 | 0.0677 | | No log | 11.0 | 77 | 0.6548 | 0.785 | 0.3335 | 1.4895 | 0.785 | 0.7707 | 0.2281 | 0.0885 | | No log | 12.0 | 84 | 0.5847 | 0.805 | 0.3002 | 1.4317 | 0.805 | 0.7807 | 0.2264 | 0.0756 | | No log | 13.0 | 91 | 0.5956 | 0.81 | 0.3040 | 1.2590 | 0.81 | 0.7928 | 0.2241 | 0.0556 | | No log | 14.0 | 98 | 0.5692 | 0.81 | 0.3025 | 1.2119 | 0.81 | 0.8043 | 0.2235 | 0.0665 | | No log | 15.0 | 105 | 0.5223 | 0.83 | 0.2762 | 1.1162 | 0.83 | 0.8221 | 0.1798 | 0.0552 | | No log | 16.0 | 112 | 0.4981 | 0.84 | 0.2523 | 1.0864 | 0.8400 | 0.8372 | 0.1868 | 0.0396 | | No log | 17.0 | 119 | 0.5207 | 0.805 | 0.2741 | 1.0416 | 0.805 | 0.7897 | 0.1960 | 0.0551 | | No log | 18.0 | 126 | 0.5165 | 0.84 | 0.2723 | 1.1596 | 0.8400 | 0.8325 | 0.1942 | 0.0506 | | No log | 19.0 | 133 | 0.4979 | 0.845 | 0.2573 | 1.2329 | 0.845 | 0.8297 | 0.1825 | 0.0444 | | No log | 20.0 | 140 | 0.4953 | 0.855 | 0.2565 | 1.1213 | 0.855 | 0.8442 | 0.1844 | 0.0474 | | No log | 21.0 | 147 | 0.5296 | 0.82 | 0.2792 | 1.0000 | 0.82 | 0.8218 | 0.1768 | 0.0523 | | No log | 22.0 | 154 | 0.5027 | 0.835 | 0.2625 | 0.9926 | 0.835 | 0.8238 | 0.2035 | 0.0481 | | No log | 23.0 | 161 | 0.5027 | 0.84 | 0.2642 | 1.0500 | 
0.8400 | 0.8299 | 0.1616 | 0.0482 | | No log | 24.0 | 168 | 0.5017 | 0.84 | 0.2616 | 1.0560 | 0.8400 | 0.8314 | 0.1819 | 0.0497 | | No log | 25.0 | 175 | 0.4942 | 0.85 | 0.2594 | 1.1003 | 0.85 | 0.8407 | 0.1793 | 0.0483 | | No log | 26.0 | 182 | 0.4943 | 0.83 | 0.2586 | 1.0436 | 0.83 | 0.8140 | 0.1869 | 0.0518 | | No log | 27.0 | 189 | 0.4950 | 0.835 | 0.2613 | 1.0817 | 0.835 | 0.8224 | 0.2039 | 0.0504 | | No log | 28.0 | 196 | 0.4957 | 0.85 | 0.2599 | 1.1109 | 0.85 | 0.8309 | 0.2058 | 0.0485 | | No log | 29.0 | 203 | 0.4956 | 0.845 | 0.2599 | 1.0914 | 0.845 | 0.8304 | 0.1916 | 0.0492 | | No log | 30.0 | 210 | 0.4893 | 0.84 | 0.2561 | 1.0890 | 0.8400 | 0.8214 | 0.2071 | 0.0482 | | No log | 31.0 | 217 | 0.4920 | 0.835 | 0.2587 | 1.0907 | 0.835 | 0.8270 | 0.2031 | 0.0482 | | No log | 32.0 | 224 | 0.4927 | 0.83 | 0.2601 | 1.0879 | 0.83 | 0.8157 | 0.2093 | 0.0500 | | No log | 33.0 | 231 | 0.4925 | 0.835 | 0.2593 | 1.0886 | 0.835 | 0.8270 | 0.1810 | 0.0484 | | No log | 34.0 | 238 | 0.4909 | 0.845 | 0.2578 | 1.0871 | 0.845 | 0.8304 | 0.1916 | 0.0478 | | No log | 35.0 | 245 | 0.4927 | 0.845 | 0.2591 | 1.0866 | 0.845 | 0.8378 | 0.1943 | 0.0473 | | No log | 36.0 | 252 | 0.4919 | 0.85 | 0.2581 | 1.0891 | 0.85 | 0.8342 | 0.2193 | 0.0475 | | No log | 37.0 | 259 | 0.4908 | 0.845 | 0.2579 | 1.0867 | 0.845 | 0.8346 | 0.2215 | 0.0474 | | No log | 38.0 | 266 | 0.4929 | 0.85 | 0.2590 | 1.0873 | 0.85 | 0.8407 | 0.1884 | 0.0471 | | No log | 39.0 | 273 | 0.4913 | 0.85 | 0.2584 | 1.0861 | 0.85 | 0.8374 | 0.1944 | 0.0474 | | No log | 40.0 | 280 | 0.4933 | 0.835 | 0.2595 | 1.0871 | 0.835 | 0.8248 | 0.1893 | 0.0491 | | No log | 41.0 | 287 | 0.4936 | 0.84 | 0.2599 | 1.0863 | 0.8400 | 0.8276 | 0.1860 | 0.0486 | | No log | 42.0 | 294 | 0.4911 | 0.85 | 0.2580 | 1.0861 | 0.85 | 0.8374 | 0.2186 | 0.0474 | | No log | 43.0 | 301 | 0.4915 | 0.85 | 0.2581 | 1.0860 | 0.85 | 0.8374 | 0.2023 | 0.0475 | | No log | 44.0 | 308 | 0.4921 | 0.85 | 0.2586 | 1.0874 | 0.85 | 0.8374 | 0.2013 | 0.0477 | | No log | 45.0 | 315 | 0.4915 | 0.85 | 0.2583 | 1.0862 | 0.85 | 0.8374 | 0.1941 | 0.0475 | | No log | 46.0 | 322 | 0.4918 | 0.85 | 0.2584 | 1.0878 | 0.85 | 0.8374 | 0.1852 | 0.0473 | | No log | 47.0 | 329 | 0.4916 | 0.85 | 0.2583 | 1.0873 | 0.85 | 0.8374 | 0.2089 | 0.0473 | | No log | 48.0 | 336 | 0.4921 | 0.85 | 0.2586 | 1.0879 | 0.85 | 0.8374 | 0.2026 | 0.0477 | | No log | 49.0 | 343 | 0.4918 | 0.845 | 0.2584 | 1.0884 | 0.845 | 0.8282 | 0.1963 | 0.0478 | | No log | 50.0 | 350 | 0.4922 | 0.85 | 0.2587 | 1.0871 | 0.85 | 0.8374 | 0.2102 | 0.0474 | | No log | 51.0 | 357 | 0.4920 | 0.85 | 0.2585 | 1.0879 | 0.85 | 0.8374 | 0.2095 | 0.0474 | | No log | 52.0 | 364 | 0.4926 | 0.85 | 0.2589 | 1.0878 | 0.85 | 0.8374 | 0.2022 | 0.0477 | | No log | 53.0 | 371 | 0.4920 | 0.85 | 0.2586 | 1.0888 | 0.85 | 0.8374 | 0.2027 | 0.0475 | | No log | 54.0 | 378 | 0.4921 | 0.85 | 0.2586 | 1.0886 | 0.85 | 0.8374 | 0.2020 | 0.0474 | | No log | 55.0 | 385 | 0.4921 | 0.85 | 0.2587 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0471 | | No log | 56.0 | 392 | 0.4925 | 0.85 | 0.2589 | 1.0881 | 0.85 | 0.8374 | 0.1946 | 0.0473 | | No log | 57.0 | 399 | 0.4917 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1932 | 0.0472 | | No log | 58.0 | 406 | 0.4921 | 0.85 | 0.2586 | 1.0877 | 0.85 | 0.8374 | 0.1948 | 0.0476 | | No log | 59.0 | 413 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1931 | 0.0472 | | No log | 60.0 | 420 | 0.4918 | 0.85 | 0.2583 | 1.0882 | 0.85 | 0.8374 | 0.1945 | 0.0475 | | No log | 61.0 | 427 | 0.4916 | 0.85 | 0.2582 | 1.0883 | 0.85 | 0.8374 | 0.1936 | 0.0472 
| | No log | 62.0 | 434 | 0.4920 | 0.85 | 0.2586 | 1.0882 | 0.85 | 0.8374 | 0.1942 | 0.0473 | | No log | 63.0 | 441 | 0.4922 | 0.85 | 0.2587 | 1.0889 | 0.85 | 0.8374 | 0.1935 | 0.0473 | | No log | 64.0 | 448 | 0.4921 | 0.85 | 0.2586 | 1.0885 | 0.85 | 0.8374 | 0.1848 | 0.0473 | | No log | 65.0 | 455 | 0.4916 | 0.85 | 0.2582 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0474 | | No log | 66.0 | 462 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1849 | 0.0472 | | No log | 67.0 | 469 | 0.4917 | 0.85 | 0.2584 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 | | No log | 68.0 | 476 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.2011 | 0.0471 | | No log | 69.0 | 483 | 0.4918 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2007 | 0.0471 | | No log | 70.0 | 490 | 0.4919 | 0.85 | 0.2584 | 1.0886 | 0.85 | 0.8374 | 0.1848 | 0.0474 | | No log | 71.0 | 497 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.1940 | 0.0474 | | 0.1824 | 72.0 | 504 | 0.4919 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2011 | 0.0471 | | 0.1824 | 73.0 | 511 | 0.4917 | 0.85 | 0.2583 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 | | 0.1824 | 74.0 | 518 | 0.4920 | 0.85 | 0.2585 | 1.0890 | 0.85 | 0.8374 | 0.1848 | 0.0472 | | 0.1824 | 75.0 | 525 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1846 | 0.0472 | | 0.1824 | 76.0 | 532 | 0.4918 | 0.85 | 0.2583 | 1.0889 | 0.85 | 0.8374 | 0.1930 | 0.0472 | | 0.1824 | 77.0 | 539 | 0.4917 | 0.85 | 0.2582 | 1.0891 | 0.85 | 0.8374 | 0.2005 | 0.0472 | | 0.1824 | 78.0 | 546 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1928 | 0.0472 | | 0.1824 | 79.0 | 553 | 0.4920 | 0.85 | 0.2585 | 1.0893 | 0.85 | 0.8374 | 0.1845 | 0.0473 | | 0.1824 | 80.0 | 560 | 0.4919 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0473 | | 0.1824 | 81.0 | 567 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1925 | 0.0471 | | 0.1824 | 82.0 | 574 | 0.4920 | 0.85 | 0.2585 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 | | 0.1824 | 83.0 | 581 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1916 | 0.0471 | | 0.1824 | 84.0 | 588 | 0.4918 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1926 | 0.0471 | | 0.1824 | 85.0 | 595 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 | | 0.1824 | 86.0 | 602 | 0.4918 | 0.85 | 0.2584 | 1.0893 | 0.85 | 0.8374 | 0.1927 | 0.0472 | | 0.1824 | 87.0 | 609 | 0.4918 | 0.85 | 0.2584 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 | | 0.1824 | 88.0 | 616 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 | | 0.1824 | 89.0 | 623 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0471 | | 0.1824 | 90.0 | 630 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1998 | 0.0471 | | 0.1824 | 91.0 | 637 | 0.4919 | 0.85 | 0.2584 | 1.0894 | 0.85 | 0.8374 | 0.1916 | 0.0471 | | 0.1824 | 92.0 | 644 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 | | 0.1824 | 93.0 | 651 | 0.4918 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1917 | 0.0471 | | 0.1824 | 94.0 | 658 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1844 | 0.0471 | | 0.1824 | 95.0 | 665 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 | | 0.1824 | 96.0 | 672 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 | | 0.1824 | 97.0 | 679 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1916 | 0.0471 | | 0.1824 | 98.0 | 686 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 | | 0.1824 | 99.0 | 693 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 | | 0.1824 | 100.0 | 700 | 0.4918 | 0.85 | 0.2583 | 1.0894 
| 0.85 | 0.8374 | 0.1917 | 0.0470 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
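The model name above encodes a distillation temperature of 5.0 and an alpha of 0.7 (`CEKD_t5.0_a0.7`). A sketch of the usual temperature-scaled CE+KL distillation objective with those values follows; how the repository actually combines the two terms is an assumption.

```python
# Sketch of a temperature-scaled CE+KD objective matching the t5.0/a0.7 naming above.
import torch
import torch.nn.functional as F

def ce_kd_loss(student_logits, teacher_logits, labels, T=5.0, alpha=0.7):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients as in Hinton et al.
    return alpha * ce + (1.0 - alpha) * kd

student_logits = torch.randn(4, 10, requires_grad=True)  # 10 Tobacco3482 classes
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
ce_kd_loss(student_logits, teacher_logits, labels).backward()
```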
umanlp/babelbert-ft-xlm-r
umanlp
2023-07-10T21:57:04Z
160
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2023-07-07T21:22:09Z
This model is one of the artifacts of the paper [Massively Multilingual Lexical Specialization of Multilingual Transformers](https://aclanthology.org/2023.acl-long.426/). It was obtained by fine-tuning the representations of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the dataset [babelbert-dataset](https://huggingface.co/datasets/umanlp/babelbert-dataset).
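A minimal feature-extraction sketch for this lexically specialized encoder follows; mean pooling over subword tokens is an illustrative choice, not something the paper or card prescribes.

```python
# Feature-extraction sketch; the pooling strategy is an assumption.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "umanlp/babelbert-ft-xlm-r"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

words = ["house", "Haus", "casa"]
inputs = tokenizer(words, return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean-pooled word vectors
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```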
NasimB/gpt2-concat-mod-datasets-txt-processing-rarity-all
NasimB
2023-07-10T21:52:09Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T19:34:36Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-mod-datasets-txt-processing-rarity-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-mod-datasets-txt-processing-rarity-all This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.4313 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6762 | 0.3 | 500 | 5.6569 | | 5.3312 | 0.6 | 1000 | 5.2263 | | 4.9972 | 0.91 | 1500 | 4.9819 | | 4.7192 | 1.21 | 2000 | 4.8383 | | 4.5828 | 1.51 | 2500 | 4.7225 | | 4.481 | 1.81 | 3000 | 4.6261 | | 4.327 | 2.12 | 3500 | 4.5756 | | 4.1937 | 2.42 | 4000 | 4.5233 | | 4.1572 | 2.72 | 4500 | 4.4631 | | 4.108 | 3.02 | 5000 | 4.4211 | | 3.8775 | 3.33 | 5500 | 4.4180 | | 3.8911 | 3.63 | 6000 | 4.3805 | | 3.8636 | 3.93 | 6500 | 4.3392 | | 3.6522 | 4.23 | 7000 | 4.3680 | | 3.6068 | 4.54 | 7500 | 4.3577 | | 3.6059 | 4.84 | 8000 | 4.3344 | | 3.4722 | 5.14 | 8500 | 4.3582 | | 3.3577 | 5.44 | 9000 | 4.3591 | | 3.3543 | 5.75 | 9500 | 4.3534 | | 3.3219 | 6.05 | 10000 | 4.3616 | | 3.1978 | 6.35 | 10500 | 4.3687 | | 3.2005 | 6.65 | 11000 | 4.3698 | | 3.1998 | 6.96 | 11500 | 4.3697 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
umanlp/babelbert-ft-mbert
umanlp
2023-07-10T21:47:23Z
160
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "dataset:umanlp/babelbert-dataset", "endpoints_compatible", "region:us" ]
feature-extraction
2023-07-07T21:26:35Z
--- datasets: - umanlp/babelbert-dataset --- This model is one of the artifacts of the paper [Massively Multilingual Lexical Specialization of Multilingual Transformers](https://aclanthology.org/2023.acl-long.426/). It was obtained by fine-tuning the representations of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the dataset [babelbert-dataset](https://huggingface.co/datasets/umanlp/babelbert-dataset).
luisagaino/distilbert-base-uncased-finetuned-cola
luisagaino
2023-07-10T21:45:18Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T15:12:48Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: luisagaino/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # luisagaino/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1957 - Validation Loss: 0.5520 - Train Matthews Correlation: 0.5070 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5226 | 0.4778 | 0.4379 | 0 | | 0.3316 | 0.4880 | 0.4994 | 1 | | 0.1957 | 0.5520 | 0.5070 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
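The table above reports Matthews correlation, the standard CoLA metric. For reference, it can be computed from predictions with scikit-learn as sketched below; the labels shown are toy values, not the model's actual outputs.

```python
# Toy Matthews-correlation computation for a CoLA-style binary task.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]          # gold acceptability labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]          # model predictions
print(matthews_corrcoef(y_true, y_pred))   # 0.5 for this toy example
```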
bhenrym14/airoboros-7b-gpt4-1.4.1-lxctx-PI-16384-GPTQ
bhenrym14
2023-07-10T21:44:47Z
9
2
transformers
[ "transformers", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "arxiv:2306.15595", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T02:16:21Z
--- datasets: - jondurbin/airoboros-gpt4-1.4.1 --- Mostly untested! # RoPE Scaled QLoRA Fine-tune of Llama-7b on airoboros-gpt4-1.4.1 (GPTQ) ## Overview This is [Jon Durbin's Airoboros 7B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4) (GPTQ Quantization) with several key modifications: - Context length extended to 16384 by RoPE Scaled Embeddings. - The Llama-7b base model is pretrained for an additional 150 steps on 16384-length sequences from the Pile dataset. - During the fine-tune, training sequences beyond 2048 have the target truncated to equal 2048. - Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4 - **This is a QLoRA fine-tune** Pretraining and fine-tuning each took ~8 hours on 1x RTX 6000 Ada. ## How to Use For context beyond 8192 tokens, do NOT use exllama. AutoGPTQ appears to work. REQUIRED: you'll need to patch in the appropriate RoPE scaling module. See: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch-16k.py). You will need to call `replace_llama_rope_with_scaled_rope` in ooba somewhere. Calling this at the top of the training module after the imports works for me. Hopefully there is a quick fix to exllama that can make >8k work soon. Otherwise, for context <8k, use exllama: set `max_seq_len` to 16384 and `compress_pos_emb` to 8. ## Motivation Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [Meta AI](https://arxiv.org/abs/2306.15595)) demonstrate the ability to extend the context window without (total) retraining. Fine-tuning has been shown to be necessary to properly leverage the longer context. Here I attempt to take a smaller model and extend the context to 16k tokens. This, however, proved problematic, as stability suffered in the 8-10k+ range. The Meta paper demonstrated that decreasing perplexities can still be achieved at these context lengths; however, their approach involved tuning all variables on the maximum sequence length after incorporating the RoPE scaling adjustment. Here I sought to stabilize the performance at longer contexts by fine-tuning in two stages: 1. Fine-tune on 16k-token sequences from the Pile dataset. This is very compute intensive. 2. Fine-tune on the airoboros dataset as I have done in prior models. The results follow. ## Relative Performance (perplexity) | Context (tokens) | bhenrym14/airo-7b-lxctx-PI-16384-fp16 | No Pretrain | airoboros-7b-gpt4-1.4 | | ---| ------- | ------ | ---| | 64 | 29.39 | 32.28 | **25.90** | | 128 | 18.80 | 20.43 | **15.70** | | 256 | 13.67 | 14.60 | **11.33** | | 512 | 10.60 | 11.20 | **8.60** | | 1024 | 8.54 | 8.945 | **7.07** | | 2048 | 7.34 | 7.71 | **6.2** | | 4096 | **6.81** | 7.25 | 48.65 | | 8192 | **6.49** | 7.07 | 1018 | | 16384 | **6.31** | 8.70 | very big | - The pretraining successfully ameliorates the rise in perplexity between 8192 and 16384. Not only that, it outperforms the no-pretrain variant everywhere. - For contexts shorter than the original 2048, the original model has lower perplexity. This is consistent with the literature. The gap shrinks with context length, with the original becoming incoherent beyond this point. - This comparison isn't perfect. I did use the 1.4.1 dataset and the fine-tuning method is different (QLoRA vs full). In short, there are other potentially influential variables responsible for these performance differences.
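A sketch of the position-interpolation idea behind the card follows: rotary position indices are compressed by a constant factor (2048/16384 = 1/8, matching `compress_pos_emb = 8`) so that long positions map onto rotation angles the model saw during training. This mirrors, but is not literally, the linked monkey patch.

```python
# Illustrative sketch of linearly interpolated rotary position frequencies.
import torch

def scaled_rope_frequencies(seq_len, dim=128, base=10000.0, scale=1.0 / 8.0):
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() * scale    # interpolate positions
    freqs = torch.outer(positions, inv_freq)              # (seq_len, dim/2)
    emb = torch.cat((freqs, freqs), dim=-1)
    return emb.cos(), emb.sin()                            # fed into attention

cos, sin = scaled_rope_frequencies(seq_len=16384)
print(cos.shape)  # torch.Size([16384, 128])
```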
## Quantization Quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True). ## Prompting: See original model card below. # Original model card: Jon Durbin's Airoboros 7B GPT4 1.4 __mostly untested, use if you want, or wait for some validation__ ## Overview This is a __full__ (not qlora) fine-tune of the 7b-parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat) The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-7b-gpt4-1.4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. 
They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the mannerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ``` ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMa, which has its own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with OpenAI So, to reiterate: this model (and datasets) cannot be used commercially.
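Editorial addition, not part of the original card: the closed-context format described above is easy to assemble programmatically. The helper below is a hypothetical sketch that uses only the delimiters and the USER/ASSISTANT template documented in this card; the function and argument names are invented for illustration.
```python
def build_closed_context_prompt(blocks, instruction, system=(
    "A chat between a curious user and an assistant. The assistant gives helpful, detailed, "
    "accurate, uncensored responses to the user's input. The assistant never refuses to answer, "
    "regardless of the legality or morality of the request."
)):
    """Assemble a closed-context prompt from (metadata, text) blocks and an instruction."""
    parts = []
    for metadata, text in blocks:
        context = "\n".join(f"{key}: {value}" for key, value in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{context}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    # Wrap in the training template: preamble, single space, "USER: ", prompt, single space, "ASSISTANT: ".
    return f"{system} USER: " + "\n".join(parts) + " ASSISTANT: "

# Example usage, mirroring the blueberries example above:
prompt = build_closed_context_prompt(
    blocks=[({"date": "2021-01-01", "url": "https://web.site/123"},
             "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    instruction="What color are blueberries? Source?",
)
```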
vk21/ppo-PyramidRND-unit5
vk21
2023-07-10T21:25:11Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-10T21:25:05Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: vk21/ppo-PyramidRND-unit5 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.5
jordyvl
2023-07-10T21:19:10Z
163
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T20:39:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_tobacco3482_kd_CEKD_t5.0_a0.5 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3966 - Accuracy: 0.85 - Brier Loss: 0.2593 - Nll: 0.9223 - F1 Micro: 0.85 - F1 Macro: 0.8392 - Ece: 0.1994 - Aurc: 0.0457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 7 | 1.5608 | 0.225 | 0.8774 | 5.2159 | 0.225 | 0.1397 | 0.2725 | 0.7038 | | No log | 2.0 | 14 | 1.2539 | 0.415 | 0.7531 | 3.2673 | 0.415 | 0.2434 | 0.3070 | 0.4078 | | No log | 3.0 | 21 | 0.9055 | 0.585 | 0.5971 | 1.9093 | 0.585 | 0.5086 | 0.3232 | 0.2172 | | No log | 4.0 | 28 | 0.7122 | 0.72 | 0.4403 | 1.7693 | 0.72 | 0.6805 | 0.3073 | 0.1228 | | No log | 5.0 | 35 | 0.6584 | 0.74 | 0.3938 | 1.5810 | 0.74 | 0.7214 | 0.2661 | 0.1075 | | No log | 6.0 | 42 | 0.5711 | 0.8 | 0.3462 | 1.4146 | 0.8000 | 0.7524 | 0.2347 | 0.0843 | | No log | 7.0 | 49 | 0.5521 | 0.8 | 0.3199 | 1.2631 | 0.8000 | 0.7867 | 0.2542 | 0.0634 | | No log | 8.0 | 56 | 0.5603 | 0.77 | 0.3381 | 1.1808 | 0.7700 | 0.7680 | 0.2316 | 0.0858 | | No log | 9.0 | 63 | 0.5209 | 0.82 | 0.3062 | 1.2891 | 0.82 | 0.7972 | 0.2405 | 0.0792 | | No log | 10.0 | 70 | 0.5705 | 0.78 | 0.3343 | 1.5183 | 0.78 | 0.7743 | 0.2264 | 0.0874 | | No log | 11.0 | 77 | 0.5137 | 0.82 | 0.3047 | 1.2987 | 0.82 | 0.8096 | 0.2420 | 0.0592 | | No log | 12.0 | 84 | 0.4664 | 0.835 | 0.2929 | 1.1529 | 0.835 | 0.8101 | 0.2291 | 0.0753 | | No log | 13.0 | 91 | 0.4772 | 0.82 | 0.2915 | 1.2078 | 0.82 | 0.8029 | 0.2131 | 0.0620 | | No log | 14.0 | 98 | 0.4553 | 0.825 | 0.2843 | 1.1312 | 0.825 | 0.8112 | 0.2196 | 0.0453 | | No log | 15.0 | 105 | 0.4574 | 0.825 | 0.2821 | 1.1234 | 0.825 | 0.8163 | 0.2241 | 0.0554 | | No log | 16.0 | 112 | 0.4873 | 0.8 | 0.3111 | 1.2248 | 0.8000 | 0.8007 | 0.1992 | 0.0657 | | No log | 17.0 | 119 | 0.4224 | 0.855 | 0.2620 | 1.1871 | 0.855 | 0.8218 | 0.2337 | 0.0479 | | No log | 18.0 | 126 | 0.4414 | 0.825 | 0.2857 | 1.0723 | 0.825 | 0.8227 | 0.2500 | 0.0517 | | No log | 19.0 | 133 | 0.4232 | 0.845 | 0.2737 | 0.9360 | 0.845 | 0.8219 | 0.2053 | 0.0543 | | No log | 20.0 | 140 | 0.4114 | 0.845 | 0.2637 | 1.0046 | 0.845 | 0.8233 | 0.2144 | 0.0460 | | No log | 21.0 | 147 | 0.4110 | 0.835 | 0.2640 | 0.9853 | 0.835 | 0.8160 | 0.2278 | 0.0466 | | No log | 22.0 | 154 | 0.4163 | 0.845 | 0.2678 | 1.1494 | 0.845 | 0.8291 | 0.2156 | 0.0458 | | No log | 23.0 | 161 | 0.4243 | 0.835 | 0.2779 | 0.9475 | 
0.835 | 0.8269 | 0.2420 | 0.0554 | | No log | 24.0 | 168 | 0.4079 | 0.835 | 0.2683 | 0.9249 | 0.835 | 0.8044 | 0.2091 | 0.0532 | | No log | 25.0 | 175 | 0.4027 | 0.85 | 0.2621 | 0.9433 | 0.85 | 0.8361 | 0.2138 | 0.0530 | | No log | 26.0 | 182 | 0.3975 | 0.855 | 0.2590 | 0.9310 | 0.855 | 0.8457 | 0.1932 | 0.0487 | | No log | 27.0 | 189 | 0.4032 | 0.85 | 0.2650 | 0.9823 | 0.85 | 0.8425 | 0.2088 | 0.0528 | | No log | 28.0 | 196 | 0.4037 | 0.845 | 0.2650 | 1.0692 | 0.845 | 0.8361 | 0.2157 | 0.0496 | | No log | 29.0 | 203 | 0.4027 | 0.845 | 0.2652 | 1.0423 | 0.845 | 0.8295 | 0.1917 | 0.0502 | | No log | 30.0 | 210 | 0.3989 | 0.85 | 0.2610 | 1.0633 | 0.85 | 0.8392 | 0.2214 | 0.0482 | | No log | 31.0 | 217 | 0.3985 | 0.855 | 0.2609 | 1.0374 | 0.855 | 0.8424 | 0.2074 | 0.0472 | | No log | 32.0 | 224 | 0.3986 | 0.85 | 0.2596 | 1.0403 | 0.85 | 0.8392 | 0.2184 | 0.0462 | | No log | 33.0 | 231 | 0.3990 | 0.85 | 0.2603 | 1.0369 | 0.85 | 0.8392 | 0.2079 | 0.0470 | | No log | 34.0 | 238 | 0.3982 | 0.85 | 0.2600 | 0.9765 | 0.85 | 0.8392 | 0.2160 | 0.0467 | | No log | 35.0 | 245 | 0.3977 | 0.85 | 0.2601 | 0.9762 | 0.85 | 0.8392 | 0.2108 | 0.0465 | | No log | 36.0 | 252 | 0.3977 | 0.85 | 0.2600 | 1.0372 | 0.85 | 0.8392 | 0.2075 | 0.0466 | | No log | 37.0 | 259 | 0.3972 | 0.85 | 0.2597 | 1.0383 | 0.85 | 0.8392 | 0.2091 | 0.0465 | | No log | 38.0 | 266 | 0.3967 | 0.85 | 0.2590 | 0.9796 | 0.85 | 0.8392 | 0.1987 | 0.0461 | | No log | 39.0 | 273 | 0.3979 | 0.85 | 0.2601 | 1.0390 | 0.85 | 0.8392 | 0.1991 | 0.0467 | | No log | 40.0 | 280 | 0.3976 | 0.85 | 0.2601 | 0.9775 | 0.85 | 0.8392 | 0.2175 | 0.0465 | | No log | 41.0 | 287 | 0.3979 | 0.85 | 0.2603 | 0.9796 | 0.85 | 0.8392 | 0.1930 | 0.0467 | | No log | 42.0 | 294 | 0.3973 | 0.85 | 0.2598 | 0.9746 | 0.85 | 0.8392 | 0.2175 | 0.0468 | | No log | 43.0 | 301 | 0.3972 | 0.85 | 0.2598 | 0.9798 | 0.85 | 0.8392 | 0.1931 | 0.0466 | | No log | 44.0 | 308 | 0.3969 | 0.85 | 0.2594 | 0.9784 | 0.85 | 0.8392 | 0.2094 | 0.0465 | | No log | 45.0 | 315 | 0.3971 | 0.85 | 0.2596 | 0.9847 | 0.85 | 0.8392 | 0.2033 | 0.0464 | | No log | 46.0 | 322 | 0.3969 | 0.85 | 0.2597 | 0.9768 | 0.85 | 0.8392 | 0.2100 | 0.0465 | | No log | 47.0 | 329 | 0.3974 | 0.85 | 0.2599 | 0.9788 | 0.85 | 0.8392 | 0.2090 | 0.0467 | | No log | 48.0 | 336 | 0.3971 | 0.85 | 0.2596 | 0.9797 | 0.85 | 0.8392 | 0.1977 | 0.0463 | | No log | 49.0 | 343 | 0.3972 | 0.85 | 0.2597 | 0.9391 | 0.85 | 0.8392 | 0.1903 | 0.0465 | | No log | 50.0 | 350 | 0.3969 | 0.85 | 0.2596 | 0.9802 | 0.85 | 0.8392 | 0.1985 | 0.0464 | | No log | 51.0 | 357 | 0.3970 | 0.85 | 0.2596 | 0.9795 | 0.85 | 0.8392 | 0.2161 | 0.0463 | | No log | 52.0 | 364 | 0.3973 | 0.85 | 0.2597 | 0.9333 | 0.85 | 0.8392 | 0.1983 | 0.0462 | | No log | 53.0 | 371 | 0.3971 | 0.85 | 0.2597 | 0.9408 | 0.85 | 0.8392 | 0.2022 | 0.0467 | | No log | 54.0 | 378 | 0.3970 | 0.85 | 0.2595 | 0.9371 | 0.85 | 0.8392 | 0.1992 | 0.0460 | | No log | 55.0 | 385 | 0.3970 | 0.85 | 0.2596 | 0.9262 | 0.85 | 0.8392 | 0.1917 | 0.0464 | | No log | 56.0 | 392 | 0.3971 | 0.85 | 0.2595 | 0.9195 | 0.85 | 0.8392 | 0.1927 | 0.0461 | | No log | 57.0 | 399 | 0.3970 | 0.85 | 0.2596 | 0.9789 | 0.85 | 0.8392 | 0.1992 | 0.0462 | | No log | 58.0 | 406 | 0.3968 | 0.85 | 0.2594 | 0.9255 | 0.85 | 0.8392 | 0.1929 | 0.0462 | | No log | 59.0 | 413 | 0.3967 | 0.85 | 0.2593 | 0.9795 | 0.85 | 0.8392 | 0.1996 | 0.0459 | | No log | 60.0 | 420 | 0.3970 | 0.85 | 0.2596 | 0.9787 | 0.85 | 0.8392 | 0.1994 | 0.0461 | | No log | 61.0 | 427 | 0.3967 | 0.85 | 0.2594 | 0.9803 | 0.85 | 0.8392 | 0.2073 | 0.0461 | | No log | 
62.0 | 434 | 0.3968 | 0.85 | 0.2594 | 0.9325 | 0.85 | 0.8392 | 0.1996 | 0.0460 | | No log | 63.0 | 441 | 0.3968 | 0.85 | 0.2595 | 0.9276 | 0.85 | 0.8392 | 0.2063 | 0.0459 | | No log | 64.0 | 448 | 0.3968 | 0.85 | 0.2595 | 0.9247 | 0.85 | 0.8392 | 0.1991 | 0.0461 | | No log | 65.0 | 455 | 0.3968 | 0.85 | 0.2595 | 0.9301 | 0.85 | 0.8392 | 0.1989 | 0.0459 | | No log | 66.0 | 462 | 0.3968 | 0.85 | 0.2595 | 0.9310 | 0.85 | 0.8392 | 0.1922 | 0.0459 | | No log | 67.0 | 469 | 0.3968 | 0.85 | 0.2595 | 0.9250 | 0.85 | 0.8392 | 0.2061 | 0.0459 | | No log | 68.0 | 476 | 0.3968 | 0.85 | 0.2594 | 0.9234 | 0.85 | 0.8392 | 0.1994 | 0.0461 | | No log | 69.0 | 483 | 0.3967 | 0.85 | 0.2594 | 0.9257 | 0.85 | 0.8392 | 0.2065 | 0.0459 | | No log | 70.0 | 490 | 0.3967 | 0.85 | 0.2594 | 0.9205 | 0.85 | 0.8392 | 0.1840 | 0.0459 | | No log | 71.0 | 497 | 0.3967 | 0.85 | 0.2594 | 0.9258 | 0.85 | 0.8392 | 0.2017 | 0.0458 | | 0.1666 | 72.0 | 504 | 0.3969 | 0.85 | 0.2594 | 0.9297 | 0.85 | 0.8392 | 0.2017 | 0.0458 | | 0.1666 | 73.0 | 511 | 0.3966 | 0.85 | 0.2593 | 0.9223 | 0.85 | 0.8392 | 0.1920 | 0.0457 | | 0.1666 | 74.0 | 518 | 0.3967 | 0.85 | 0.2594 | 0.9228 | 0.85 | 0.8392 | 0.1920 | 0.0459 | | 0.1666 | 75.0 | 525 | 0.3967 | 0.85 | 0.2594 | 0.9257 | 0.85 | 0.8392 | 0.1919 | 0.0459 | | 0.1666 | 76.0 | 532 | 0.3966 | 0.85 | 0.2593 | 0.9232 | 0.85 | 0.8392 | 0.1994 | 0.0458 | | 0.1666 | 77.0 | 539 | 0.3968 | 0.85 | 0.2594 | 0.9224 | 0.85 | 0.8392 | 0.1920 | 0.0459 | | 0.1666 | 78.0 | 546 | 0.3966 | 0.85 | 0.2593 | 0.9242 | 0.85 | 0.8392 | 0.1918 | 0.0458 | | 0.1666 | 79.0 | 553 | 0.3967 | 0.85 | 0.2594 | 0.9233 | 0.85 | 0.8392 | 0.1920 | 0.0459 | | 0.1666 | 80.0 | 560 | 0.3968 | 0.85 | 0.2594 | 0.9241 | 0.85 | 0.8392 | 0.1919 | 0.0458 | | 0.1666 | 81.0 | 567 | 0.3967 | 0.85 | 0.2594 | 0.9225 | 0.85 | 0.8392 | 0.1918 | 0.0459 | | 0.1666 | 82.0 | 574 | 0.3967 | 0.85 | 0.2594 | 0.9233 | 0.85 | 0.8392 | 0.1919 | 0.0459 | | 0.1666 | 83.0 | 581 | 0.3967 | 0.85 | 0.2593 | 0.9246 | 0.85 | 0.8392 | 0.1919 | 0.0458 | | 0.1666 | 84.0 | 588 | 0.3966 | 0.85 | 0.2593 | 0.9229 | 0.85 | 0.8392 | 0.2017 | 0.0458 | | 0.1666 | 85.0 | 595 | 0.3966 | 0.85 | 0.2593 | 0.9232 | 0.85 | 0.8392 | 0.2017 | 0.0458 | | 0.1666 | 86.0 | 602 | 0.3967 | 0.85 | 0.2593 | 0.9225 | 0.85 | 0.8392 | 0.1920 | 0.0458 | | 0.1666 | 87.0 | 609 | 0.3966 | 0.85 | 0.2593 | 0.9214 | 0.85 | 0.8392 | 0.1999 | 0.0458 | | 0.1666 | 88.0 | 616 | 0.3967 | 0.85 | 0.2593 | 0.9214 | 0.85 | 0.8392 | 0.1920 | 0.0458 | | 0.1666 | 89.0 | 623 | 0.3966 | 0.85 | 0.2593 | 0.9227 | 0.85 | 0.8392 | 0.2097 | 0.0458 | | 0.1666 | 90.0 | 630 | 0.3967 | 0.85 | 0.2594 | 0.9219 | 0.85 | 0.8392 | 0.1919 | 0.0458 | | 0.1666 | 91.0 | 637 | 0.3966 | 0.85 | 0.2593 | 0.9212 | 0.85 | 0.8392 | 0.1994 | 0.0458 | | 0.1666 | 92.0 | 644 | 0.3966 | 0.85 | 0.2593 | 0.9227 | 0.85 | 0.8392 | 0.1919 | 0.0458 | | 0.1666 | 93.0 | 651 | 0.3966 | 0.85 | 0.2593 | 0.9231 | 0.85 | 0.8392 | 0.2017 | 0.0458 | | 0.1666 | 94.0 | 658 | 0.3967 | 0.85 | 0.2593 | 0.9220 | 0.85 | 0.8392 | 0.1919 | 0.0458 | | 0.1666 | 95.0 | 665 | 0.3966 | 0.85 | 0.2593 | 0.9217 | 0.85 | 0.8392 | 0.1920 | 0.0457 | | 0.1666 | 96.0 | 672 | 0.3966 | 0.85 | 0.2593 | 0.9218 | 0.85 | 0.8392 | 0.1920 | 0.0458 | | 0.1666 | 97.0 | 679 | 0.3966 | 0.85 | 0.2593 | 0.9221 | 0.85 | 0.8392 | 0.1920 | 0.0458 | | 0.1666 | 98.0 | 686 | 0.3966 | 0.85 | 0.2593 | 0.9224 | 0.85 | 0.8392 | 0.1920 | 0.0457 | | 0.1666 | 99.0 | 693 | 0.3966 | 0.85 | 0.2593 | 0.9224 | 0.85 | 0.8392 | 0.1994 | 0.0457 | | 0.1666 | 100.0 | 700 | 0.3966 | 0.85 | 0.2593 | 0.9223 | 0.85 | 
0.8392 | 0.1994 | 0.0457 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
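The card above does not include usage code; as an editorial sketch (not from the original card), inference with the standard 🤗 Transformers image-classification pipeline might look like the following, assuming the checkpoint on the Hub ships its image processor configuration. The image path is a placeholder.
```python
from transformers import pipeline

# Minimal inference sketch for the fine-tuned document-image classifier.
classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.5",
)
predictions = classifier("path/to/document_page.png", top_k=3)  # placeholder image path
print(predictions)  # list of {"label": ..., "score": ...} dicts
```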
jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE
jordyvl
2023-07-10T21:13:05Z
164
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T20:08:22Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7723 - Accuracy: 0.6025 - Brier Loss: 0.5295 - Nll: 3.6748 - F1 Micro: 0.6025 - F1 Macro: 0.6055 - Ece: 0.1688 - Aurc: 0.1708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 25 | 4.7870 | 0.065 | 0.9655 | 17.0930 | 0.065 | 0.0550 | 0.1747 | 0.9357 | | No log | 2.0 | 50 | 3.9498 | 0.205 | 0.8858 | 9.5780 | 0.205 | 0.1863 | 0.1692 | 0.6618 | | No log | 3.0 | 75 | 3.3698 | 0.3675 | 0.7672 | 6.4908 | 0.3675 | 0.3392 | 0.1676 | 0.4195 | | No log | 4.0 | 100 | 2.9935 | 0.4075 | 0.6958 | 5.5595 | 0.4075 | 0.3820 | 0.1828 | 0.3327 | | No log | 5.0 | 125 | 2.8351 | 0.455 | 0.6591 | 4.8619 | 0.455 | 0.4351 | 0.1561 | 0.2833 | | No log | 6.0 | 150 | 2.8196 | 0.4725 | 0.6595 | 4.7785 | 0.4725 | 0.4367 | 0.1808 | 0.2790 | | No log | 7.0 | 175 | 2.6352 | 0.5075 | 0.6234 | 4.9881 | 0.5075 | 0.4886 | 0.1563 | 0.2493 | | No log | 8.0 | 200 | 2.5325 | 0.525 | 0.6162 | 4.3297 | 0.525 | 0.5026 | 0.1724 | 0.2365 | | No log | 9.0 | 225 | 2.5459 | 0.53 | 0.6099 | 5.1608 | 0.53 | 0.5148 | 0.1944 | 0.2350 | | No log | 10.0 | 250 | 2.5573 | 0.5325 | 0.6161 | 5.4495 | 0.5325 | 0.5212 | 0.2052 | 0.2397 | | No log | 11.0 | 275 | 2.3199 | 0.5675 | 0.5828 | 4.1247 | 0.5675 | 0.5626 | 0.1849 | 0.2071 | | No log | 12.0 | 300 | 2.2917 | 0.565 | 0.5758 | 4.1738 | 0.565 | 0.5694 | 0.1992 | 0.2023 | | No log | 13.0 | 325 | 2.2744 | 0.555 | 0.5974 | 4.2323 | 0.555 | 0.5544 | 0.1982 | 0.2203 | | No log | 14.0 | 350 | 2.1638 | 0.5625 | 0.5807 | 4.2049 | 0.5625 | 0.5629 | 0.1868 | 0.2049 | | No log | 15.0 | 375 | 2.1934 | 0.5575 | 0.5903 | 4.3813 | 0.5575 | 0.5614 | 0.1868 | 0.2022 | | No log | 16.0 | 400 | 2.1092 | 0.5625 | 0.5702 | 3.6094 | 0.5625 | 0.5700 | 0.1846 | 0.2011 | | No log | 17.0 | 425 | 2.0379 | 0.5875 | 0.5642 | 4.4351 | 0.5875 | 0.5822 | 0.2036 | 0.1959 | | No log | 18.0 | 450 | 2.0303 | 0.5825 | 0.5558 | 3.6847 | 0.5825 | 0.5820 | 0.1684 | 0.1881 | | No log | 19.0 | 475 | 2.0506 | 0.57 | 0.5749 | 4.0014 | 0.57 | 0.5708 | 0.1725 | 0.2027 | | 1.5026 | 20.0 | 500 | 1.9932 | 0.5875 | 0.5524 | 3.8003 | 0.5875 | 0.5914 | 0.1843 | 0.1831 | | 1.5026 | 21.0 | 525 | 2.0131 | 0.565 | 0.5643 | 4.0681 | 0.565 | 0.5635 | 0.1776 | 0.1957 | | 1.5026 | 22.0 | 550 | 2.0162 | 0.5725 | 0.5712 | 3.7068 | 0.5725 | 0.5766 | 0.1934 | 
0.1955 | | 1.5026 | 23.0 | 575 | 1.9093 | 0.605 | 0.5381 | 3.7930 | 0.605 | 0.6032 | 0.1539 | 0.1749 | | 1.5026 | 24.0 | 600 | 1.9607 | 0.575 | 0.5561 | 4.5740 | 0.575 | 0.5789 | 0.1782 | 0.1902 | | 1.5026 | 25.0 | 625 | 1.8971 | 0.5825 | 0.5408 | 3.7290 | 0.5825 | 0.5754 | 0.1836 | 0.1751 | | 1.5026 | 26.0 | 650 | 1.9217 | 0.5775 | 0.5537 | 3.8085 | 0.5775 | 0.5844 | 0.1725 | 0.1843 | | 1.5026 | 27.0 | 675 | 1.9493 | 0.585 | 0.5606 | 3.6743 | 0.585 | 0.5953 | 0.1755 | 0.1882 | | 1.5026 | 28.0 | 700 | 1.8884 | 0.585 | 0.5437 | 3.7865 | 0.585 | 0.5828 | 0.1801 | 0.1822 | | 1.5026 | 29.0 | 725 | 1.9242 | 0.585 | 0.5479 | 3.9607 | 0.585 | 0.5856 | 0.1619 | 0.1817 | | 1.5026 | 30.0 | 750 | 1.8767 | 0.5975 | 0.5470 | 3.7995 | 0.5975 | 0.5966 | 0.1599 | 0.1790 | | 1.5026 | 31.0 | 775 | 1.8723 | 0.5925 | 0.5337 | 3.8962 | 0.5925 | 0.5972 | 0.1678 | 0.1729 | | 1.5026 | 32.0 | 800 | 1.9093 | 0.585 | 0.5545 | 3.8776 | 0.585 | 0.5830 | 0.1902 | 0.1841 | | 1.5026 | 33.0 | 825 | 1.8667 | 0.595 | 0.5363 | 3.8926 | 0.595 | 0.5917 | 0.1772 | 0.1745 | | 1.5026 | 34.0 | 850 | 1.8403 | 0.59 | 0.5521 | 3.8560 | 0.59 | 0.5953 | 0.1711 | 0.1800 | | 1.5026 | 35.0 | 875 | 1.8464 | 0.5925 | 0.5380 | 4.0376 | 0.5925 | 0.5970 | 0.1719 | 0.1756 | | 1.5026 | 36.0 | 900 | 1.8441 | 0.5975 | 0.5411 | 3.7193 | 0.5975 | 0.6008 | 0.1569 | 0.1753 | | 1.5026 | 37.0 | 925 | 1.8599 | 0.5875 | 0.5402 | 3.9139 | 0.5875 | 0.5908 | 0.1779 | 0.1789 | | 1.5026 | 38.0 | 950 | 1.8559 | 0.6 | 0.5458 | 3.8970 | 0.6 | 0.5991 | 0.1583 | 0.1804 | | 1.5026 | 39.0 | 975 | 1.8285 | 0.61 | 0.5370 | 3.6292 | 0.61 | 0.6155 | 0.1623 | 0.1722 | | 0.0745 | 40.0 | 1000 | 1.8309 | 0.5975 | 0.5432 | 3.6865 | 0.5975 | 0.6017 | 0.1663 | 0.1821 | | 0.0745 | 41.0 | 1025 | 1.8237 | 0.59 | 0.5348 | 3.6213 | 0.59 | 0.5921 | 0.1695 | 0.1738 | | 0.0745 | 42.0 | 1050 | 1.8421 | 0.605 | 0.5360 | 3.8592 | 0.605 | 0.6048 | 0.1601 | 0.1743 | | 0.0745 | 43.0 | 1075 | 1.8158 | 0.5975 | 0.5300 | 3.4537 | 0.5975 | 0.5953 | 0.1696 | 0.1707 | | 0.0745 | 44.0 | 1100 | 1.8238 | 0.5875 | 0.5358 | 3.7706 | 0.5875 | 0.5923 | 0.1797 | 0.1754 | | 0.0745 | 45.0 | 1125 | 1.8214 | 0.595 | 0.5463 | 3.4742 | 0.595 | 0.5981 | 0.1800 | 0.1770 | | 0.0745 | 46.0 | 1150 | 1.8162 | 0.5925 | 0.5317 | 3.9260 | 0.5925 | 0.5950 | 0.1646 | 0.1733 | | 0.0745 | 47.0 | 1175 | 1.8050 | 0.5975 | 0.5392 | 3.8322 | 0.5975 | 0.5979 | 0.1794 | 0.1763 | | 0.0745 | 48.0 | 1200 | 1.8214 | 0.5975 | 0.5347 | 3.7965 | 0.5975 | 0.6009 | 0.1555 | 0.1746 | | 0.0745 | 49.0 | 1225 | 1.7813 | 0.6 | 0.5294 | 3.8398 | 0.6 | 0.6005 | 0.1674 | 0.1688 | | 0.0745 | 50.0 | 1250 | 1.8179 | 0.6075 | 0.5336 | 3.4690 | 0.6075 | 0.6112 | 0.1743 | 0.1748 | | 0.0745 | 51.0 | 1275 | 1.7953 | 0.595 | 0.5380 | 3.7781 | 0.595 | 0.5990 | 0.1380 | 0.1727 | | 0.0745 | 52.0 | 1300 | 1.7897 | 0.6 | 0.5323 | 3.7412 | 0.6 | 0.6013 | 0.1603 | 0.1707 | | 0.0745 | 53.0 | 1325 | 1.8072 | 0.59 | 0.5428 | 3.5993 | 0.59 | 0.5947 | 0.1571 | 0.1773 | | 0.0745 | 54.0 | 1350 | 1.7834 | 0.605 | 0.5219 | 3.7600 | 0.605 | 0.6049 | 0.1563 | 0.1671 | | 0.0745 | 55.0 | 1375 | 1.7920 | 0.595 | 0.5361 | 3.5986 | 0.595 | 0.5978 | 0.1512 | 0.1717 | | 0.0745 | 56.0 | 1400 | 1.8074 | 0.5925 | 0.5387 | 3.5383 | 0.5925 | 0.5962 | 0.1669 | 0.1741 | | 0.0745 | 57.0 | 1425 | 1.7893 | 0.605 | 0.5346 | 3.6929 | 0.605 | 0.6039 | 0.1641 | 0.1681 | | 0.0745 | 58.0 | 1450 | 1.7787 | 0.6 | 0.5317 | 3.7652 | 0.6 | 0.6004 | 0.1850 | 0.1726 | | 0.0745 | 59.0 | 1475 | 1.7888 | 0.595 | 0.5323 | 3.4558 | 0.595 | 0.5975 | 0.1797 | 0.1732 | | 0.0231 | 60.0 | 1500 | 1.8064 | 0.58 | 
0.5332 | 3.7773 | 0.58 | 0.5839 | 0.1819 | 0.1762 | | 0.0231 | 61.0 | 1525 | 1.7795 | 0.6075 | 0.5298 | 3.7998 | 0.6075 | 0.6086 | 0.1678 | 0.1704 | | 0.0231 | 62.0 | 1550 | 1.7826 | 0.595 | 0.5318 | 3.6741 | 0.595 | 0.5916 | 0.1550 | 0.1715 | | 0.0231 | 63.0 | 1575 | 1.7704 | 0.5925 | 0.5325 | 3.5942 | 0.5925 | 0.5941 | 0.1619 | 0.1712 | | 0.0231 | 64.0 | 1600 | 1.7901 | 0.6025 | 0.5289 | 3.4459 | 0.6025 | 0.6054 | 0.2022 | 0.1712 | | 0.0231 | 65.0 | 1625 | 1.7944 | 0.59 | 0.5381 | 3.7591 | 0.59 | 0.5910 | 0.1599 | 0.1756 | | 0.0231 | 66.0 | 1650 | 1.7721 | 0.605 | 0.5256 | 3.5227 | 0.605 | 0.6045 | 0.1525 | 0.1677 | | 0.0231 | 67.0 | 1675 | 1.7779 | 0.5975 | 0.5306 | 3.6792 | 0.5975 | 0.5994 | 0.1667 | 0.1714 | | 0.0231 | 68.0 | 1700 | 1.7724 | 0.6 | 0.5250 | 3.7552 | 0.6 | 0.6022 | 0.1818 | 0.1683 | | 0.0231 | 69.0 | 1725 | 1.7765 | 0.6025 | 0.5283 | 3.4264 | 0.6025 | 0.6019 | 0.1671 | 0.1700 | | 0.0231 | 70.0 | 1750 | 1.7784 | 0.6 | 0.5276 | 3.6887 | 0.6 | 0.6053 | 0.1715 | 0.1703 | | 0.0231 | 71.0 | 1775 | 1.7659 | 0.6 | 0.5282 | 3.6051 | 0.6 | 0.6006 | 0.1722 | 0.1691 | | 0.0231 | 72.0 | 1800 | 1.7882 | 0.5975 | 0.5329 | 3.5950 | 0.5975 | 0.6016 | 0.1981 | 0.1716 | | 0.0231 | 73.0 | 1825 | 1.7678 | 0.6 | 0.5287 | 3.6691 | 0.6 | 0.6032 | 0.1733 | 0.1696 | | 0.0231 | 74.0 | 1850 | 1.7716 | 0.6 | 0.5286 | 3.7576 | 0.6 | 0.6013 | 0.1734 | 0.1692 | | 0.0231 | 75.0 | 1875 | 1.7704 | 0.6 | 0.5299 | 3.5917 | 0.6 | 0.6016 | 0.1645 | 0.1709 | | 0.0231 | 76.0 | 1900 | 1.7729 | 0.6 | 0.5298 | 3.6758 | 0.6 | 0.6024 | 0.1766 | 0.1710 | | 0.0231 | 77.0 | 1925 | 1.7749 | 0.6 | 0.5308 | 3.6022 | 0.6 | 0.6030 | 0.1604 | 0.1717 | | 0.0231 | 78.0 | 1950 | 1.7720 | 0.6 | 0.5294 | 3.6759 | 0.6 | 0.6017 | 0.1786 | 0.1708 | | 0.0231 | 79.0 | 1975 | 1.7734 | 0.6025 | 0.5288 | 3.6765 | 0.6025 | 0.6048 | 0.1673 | 0.1698 | | 0.0059 | 80.0 | 2000 | 1.7709 | 0.6 | 0.5286 | 3.6755 | 0.6 | 0.6020 | 0.1749 | 0.1704 | | 0.0059 | 81.0 | 2025 | 1.7730 | 0.6 | 0.5295 | 3.6760 | 0.6 | 0.6020 | 0.1677 | 0.1708 | | 0.0059 | 82.0 | 2050 | 1.7723 | 0.6025 | 0.5295 | 3.6756 | 0.6025 | 0.6055 | 0.1626 | 0.1708 | | 0.0059 | 83.0 | 2075 | 1.7721 | 0.6025 | 0.5295 | 3.6741 | 0.6025 | 0.6055 | 0.1709 | 0.1708 | | 0.0059 | 84.0 | 2100 | 1.7725 | 0.6025 | 0.5297 | 3.6747 | 0.6025 | 0.6048 | 0.1627 | 0.1709 | | 0.0059 | 85.0 | 2125 | 1.7724 | 0.6025 | 0.5295 | 3.6751 | 0.6025 | 0.6055 | 0.1639 | 0.1707 | | 0.0059 | 86.0 | 2150 | 1.7724 | 0.6025 | 0.5296 | 3.6751 | 0.6025 | 0.6055 | 0.1630 | 0.1708 | | 0.0059 | 87.0 | 2175 | 1.7724 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1638 | 0.1707 | | 0.0059 | 88.0 | 2200 | 1.7722 | 0.6025 | 0.5295 | 3.6752 | 0.6025 | 0.6055 | 0.1645 | 0.1708 | | 0.0059 | 89.0 | 2225 | 1.7723 | 0.6025 | 0.5295 | 3.6747 | 0.6025 | 0.6055 | 0.1639 | 0.1708 | | 0.0059 | 90.0 | 2250 | 1.7723 | 0.6025 | 0.5294 | 3.6750 | 0.6025 | 0.6055 | 0.1643 | 0.1708 | | 0.0059 | 91.0 | 2275 | 1.7723 | 0.6025 | 0.5294 | 3.6750 | 0.6025 | 0.6055 | 0.1643 | 0.1708 | | 0.0059 | 92.0 | 2300 | 1.7723 | 0.6025 | 0.5295 | 3.6747 | 0.6025 | 0.6055 | 0.1639 | 0.1708 | | 0.0059 | 93.0 | 2325 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1637 | 0.1707 | | 0.0059 | 94.0 | 2350 | 1.7722 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1688 | 0.1708 | | 0.0059 | 95.0 | 2375 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1643 | 0.1708 | | 0.0059 | 96.0 | 2400 | 1.7723 | 0.6025 | 0.5294 | 3.6748 | 0.6025 | 0.6055 | 0.1643 | 0.1707 | | 0.0059 | 97.0 | 2425 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 
0.1688 | 0.1708 | | 0.0059 | 98.0 | 2450 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1643 | 0.1708 | | 0.0059 | 99.0 | 2475 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1688 | 0.1708 | | 0.0 | 100.0 | 2500 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1688 | 0.1708 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.12.0 - Tokenizers 0.12.1
ALazcanoG/nominal-groups-recognition-bert-base-spanish-wwm-cased
ALazcanoG
2023-07-10T21:09:53Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "es", "dataset:ALazcanoG/spanish_nominal_groups_conll2003", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-07T23:46:21Z
--- language: - es tags: - generated_from_trainer datasets: - ALazcanoG/spanish_nominal_groups_conll2003 model-index: - name: nominal-groups-recognition-bert-base-spanish-wwm-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nominal-groups-recognition-bert-base-spanish-wwm-cased This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the ALazcanoG/spanish_nominal_groups_conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.3362 - Body Part Precision: 0.6830 - Body Part Recall: 0.7409 - Body Part F1: 0.7108 - Body Part Number: 413 - Disease Precision: 0.7439 - Disease Recall: 0.7446 - Disease F1: 0.7442 - Disease Number: 975 - Family Member Precision: 0.7941 - Family Member Recall: 0.9 - Family Member F1: 0.8438 - Family Member Number: 30 - Medication Precision: 0.8734 - Medication Recall: 0.7419 - Medication F1: 0.8023 - Medication Number: 93 - Procedure Precision: 0.6190 - Procedure Recall: 0.6270 - Procedure F1: 0.6230 - Procedure Number: 311 - Overall Precision: 0.7144 - Overall Recall: 0.7261 - Overall F1: 0.7202 - Overall Accuracy: 0.9175 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.4335 | 1.0 | 1004 | 0.3011 | 0.5944 | 0.7167 | 0.6498 | 413 | 0.7014 | 0.7036 | 0.7025 | 975 | 0.8 | 0.8 | 0.8000 | 30 | 0.7875 | 0.6774 | 0.7283 | 93 | 0.6007 | 0.5177 | 0.5561 | 311 | 0.6634 | 0.6751 | 0.6692 | 0.9063 | | 0.2379 | 2.0 | 2008 | 0.2920 | 0.6995 | 0.7215 | 0.7104 | 413 | 0.7655 | 0.7097 | 0.7366 | 975 | 0.75 | 0.8 | 0.7742 | 30 | 0.7667 | 0.7419 | 0.7541 | 93 | 0.6094 | 0.6270 | 0.6181 | 311 | 0.7212 | 0.7014 | 0.7112 | 0.9140 | | 0.1629 | 3.0 | 3012 | 0.3022 | 0.6674 | 0.7530 | 0.7076 | 413 | 0.7286 | 0.7241 | 0.7263 | 975 | 0.8571 | 0.8 | 0.8276 | 30 | 0.8519 | 0.7419 | 0.7931 | 93 | 0.5994 | 0.6495 | 0.6235 | 311 | 0.6975 | 0.7201 | 0.7086 | 0.9170 | | 0.1143 | 4.0 | 4016 | 0.3362 | 0.6830 | 0.7409 | 
0.7108 | 413 | 0.7439 | 0.7446 | 0.7442 | 975 | 0.7941 | 0.9 | 0.8438 | 30 | 0.8734 | 0.7419 | 0.8023 | 93 | 0.6190 | 0.6270 | 0.6230 | 311 | 0.7144 | 0.7261 | 0.7202 | 0.9175 | | 0.0861 | 5.0 | 5020 | 0.3643 | 0.6806 | 0.7482 | 0.7128 | 413 | 0.7428 | 0.7436 | 0.7432 | 975 | 0.8182 | 0.9 | 0.8571 | 30 | 0.8831 | 0.7312 | 0.8000 | 93 | 0.5928 | 0.6367 | 0.6140 | 311 | 0.7081 | 0.7283 | 0.7181 | 0.9163 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
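The card above omits usage code; as an editorial sketch (not from the original card), the fine-tuned model can be run through the token-classification pipeline with sub-word aggregation to obtain entity spans. The example sentence is illustrative only.
```python
from transformers import pipeline

# Minimal inference sketch for the Spanish nominal-group / clinical entity recognizer.
ner = pipeline(
    "token-classification",
    model="ALazcanoG/nominal-groups-recognition-bert-base-spanish-wwm-cased",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("El paciente refiere dolor en la rodilla derecha tras la cirugía."))
```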
Belphegor/ppo-LunarLander-v2
Belphegor
2023-07-10T21:08:44Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T21:08:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 268.37 +/- 18.85 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
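The "TODO" code block above is a placeholder left by the card template; as an editorial sketch (not from the original card), loading and evaluating the checkpoint might look like the following. The checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention, and whether you need `gym` or `gymnasium` depends on how the model was trained, so check the repository's file list and training setup first.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed, not confirmed by the card; inspect the repo before relying on it.
checkpoint = load_from_hub(repo_id="Belphegor/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```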
skrl/IsaacGymEnvs-FactoryTaskNutBoltPick-PPO
skrl
2023-07-10T20:49:13Z
0
0
skrl
[ "skrl", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T19:46:39Z
--- library_name: skrl tags: - deep-reinforcement-learning - reinforcement-learning - skrl model-index: - name: PPO results: - metrics: - type: mean_reward value: -13.83 +/- 0.26 name: Total reward (mean) task: type: reinforcement-learning name: reinforcement-learning dataset: name: IsaacGymEnvs-FactoryTaskNutBoltPick type: IsaacGymEnvs-FactoryTaskNutBoltPick --- <!-- --- torch: -14.79 +/- 2.68 jax: -13.87 +/- 0.06 numpy: -13.83 +/- 0.26 --- --> # IsaacGymEnvs-FactoryTaskNutBoltPick-PPO Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments. - **Task:** FactoryTaskNutBoltPick - **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html) # Usage (with skrl) Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts. * PyTorch ```python from skrl.utils.huggingface import download_model_from_huggingface # assuming that there is an agent named `agent` path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPick-PPO", filename="agent.pt") agent.load(path) ``` * JAX ```python from skrl.utils.huggingface import download_model_from_huggingface # assuming that there is an agent named `agent` path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPick-PPO", filename="agent.pickle") agent.load(path) ``` # Hyperparameters Note: Undefined parameters keep their values by default. ```python # https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters cfg = PPO_DEFAULT_CONFIG.copy() cfg["rollouts"] = 120 # memory_size cfg["learning_epochs"] = 8 cfg["mini_batches"] = 30 # 120 * 128 / 512 cfg["discount_factor"] = 0.99 cfg["lambda"] = 0.95 cfg["learning_rate"] = 1e-4 cfg["random_timesteps"] = 0 cfg["learning_starts"] = 0 cfg["grad_norm_clip"] = 0 cfg["ratio_clip"] = 0.2 cfg["value_clip"] = 0.2 cfg["clip_predicted_values"] = True cfg["entropy_loss_scale"] = 0.0 cfg["value_loss_scale"] = 1.0 cfg["kl_threshold"] = 0.016 cfg["rewards_shaper"] = None cfg["state_preprocessor"] = RunningStandardScaler cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device} cfg["value_preprocessor"] = RunningStandardScaler cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device} ```
Henk717/chronoboros-33B
Henk717
2023-07-10T20:48:47Z
1410
9
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-09T21:00:09Z
--- license: other --- This model was the result of a 50/50 average weight merge between Airoboros-33B-1.4 and Chronos-33B. After prolonged testing, we concluded that while this merge is highly flexible and capable of many different tasks, it has too much variation in how it answers to be reliable. Because of this, the model relies on some luck to get good results, and is therefore not recommended for people seeking a consistent experience, or for people sensitive to anticipation-based addictions. If you would like an improved version of this model that is more stable, check out my Airochronos-33B merge. An illustrative sketch of this kind of 50/50 merge is shown below.
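As an illustration of what such a merge involves (this is not the exact script used to produce chronoboros-33B, and the checkpoint paths are placeholders), a 50/50 linear weight average of two checkpoints that share the same architecture can be sketched as:
```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative 50/50 linear merge; in practice you would stream shards or merge on
# CPU to keep memory manageable for 33B-parameter models.
a = AutoModelForCausalLM.from_pretrained("path/to/airoboros-33b-1.4", torch_dtype=torch.float16)
b = AutoModelForCausalLM.from_pretrained("path/to/chronos-33b", torch_dtype=torch.float16)

b_state = b.state_dict()
merged_state = {name: 0.5 * tensor + 0.5 * b_state[name] for name, tensor in a.state_dict().items()}

a.load_state_dict(merged_state)
a.save_pretrained("chronoboros-33B")
```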
voyzan/unit1-lunar_lander_v2-A02
voyzan
2023-07-10T20:47:07Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T20:46:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.03 +/- 23.01 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jliu596/flappybirdknockoff
jliu596
2023-07-10T20:45:22Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T20:40:49Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: flappybirdknockoff results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 13.40 +/- 11.34 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
grace-pro/afriberta-large-finetuned-hausa
grace-pro
2023-07-10T20:28:21Z
127
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T19:28:18Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: afriberta-large-finetuned-hausa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta-large-finetuned-hausa This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1448 - Precision: 0.7114 - Recall: 0.5238 - F1: 0.6034 - Accuracy: 0.9652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1373 | 1.0 | 2624 | 0.1267 | 0.6804 | 0.4519 | 0.5431 | 0.9612 | | 0.1102 | 2.0 | 5248 | 0.1186 | 0.6927 | 0.5020 | 0.5821 | 0.9635 | | 0.0849 | 3.0 | 7872 | 0.1269 | 0.7114 | 0.5036 | 0.5897 | 0.9645 | | 0.0683 | 4.0 | 10496 | 0.1341 | 0.7159 | 0.5078 | 0.5941 | 0.9650 | | 0.0567 | 5.0 | 13120 | 0.1448 | 0.7114 | 0.5238 | 0.6034 | 0.9652 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO
skrl
2023-07-10T20:15:49Z
0
0
skrl
[ "skrl", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T19:47:18Z
--- library_name: skrl tags: - deep-reinforcement-learning - reinforcement-learning - skrl model-index: - name: PPO results: - metrics: - type: mean_reward value: -38.54 +/- 17.49 name: Total reward (mean) task: type: reinforcement-learning name: reinforcement-learning dataset: name: IsaacGymEnvs-FactoryTaskNutBoltPlace type: IsaacGymEnvs-FactoryTaskNutBoltPlace --- <!-- --- torch: -38.54 +/- 17.49 jax: -60.9 +/- 0.84 numpy: -58.9 +/- 1.8 --- --> # IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments. - **Task:** FactoryTaskNutBoltPlace - **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html) # Usage (with skrl) Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts. * PyTorch ```python from skrl.utils.huggingface import download_model_from_huggingface # assuming that there is an agent named `agent` path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pt") agent.load(path) ``` * JAX ```python from skrl.utils.huggingface import download_model_from_huggingface # assuming that there is an agent named `agent` path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pickle") agent.load(path) ``` # Hyperparameters Note: Undefined parameters keep their values by default. ```python # https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters cfg = PPO_DEFAULT_CONFIG.copy() cfg["rollouts"] = 120 # memory_size cfg["learning_epochs"] = 8 cfg["mini_batches"] = 30 # 120 * 128 / 512 cfg["discount_factor"] = 0.99 cfg["lambda"] = 0.95 cfg["learning_rate"] = 1e-4 cfg["random_timesteps"] = 0 cfg["learning_starts"] = 0 cfg["grad_norm_clip"] = 0 cfg["ratio_clip"] = 0.2 cfg["value_clip"] = 0.2 cfg["clip_predicted_values"] = True cfg["entropy_loss_scale"] = 0.0 cfg["value_loss_scale"] = 1.0 cfg["kl_threshold"] = 0.016 cfg["rewards_shaper"] = None cfg["state_preprocessor"] = RunningStandardScaler cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device} cfg["value_preprocessor"] = RunningStandardScaler cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device} ```
MaitreHibou/Reinforce-Pixelcopter-PLE-v0
MaitreHibou
2023-07-10T20:12:10Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T19:26:33Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 35.00 +/- 20.11 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
aphi/ppo-Pyramids
aphi
2023-07-10T20:09:36Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-10T20:09:29Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: aphi/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ALM-AHME/convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-V2
ALM-AHME
2023-07-10T20:09:00Z
213
0
transformers
[ "transformers", "pytorch", "convnextv2", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T20:08:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-V2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-V2 This model is a fine-tuned version of [ALM-AHME/convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20](https://huggingface.co/ALM-AHME/convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.9 - num_epochs: 12 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
ruggedmug/ppo-LunarLander-v2
ruggedmug
2023-07-10T20:06:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T20:09:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 280.76 +/- 15.33 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
mark-oppenheim/q-FrozenLake-v1-4x4-noSlippery
mark-oppenheim
2023-07-10T20:01:57Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T20:01:54Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mark-oppenheim/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.7
jordyvl
2023-07-10T19:58:33Z
163
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T19:17:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_tobacco3482_kd_CEKD_t2.5_a0.7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_tobacco3482_kd_CEKD_t2.5_a0.7 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5012 - Accuracy: 0.845 - Brier Loss: 0.2630 - Nll: 0.9559 - F1 Micro: 0.845 - F1 Macro: 0.8409 - Ece: 0.2081 - Aurc: 0.0487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 7 | 1.8874 | 0.22 | 0.8753 | 5.3264 | 0.22 | 0.1297 | 0.2491 | 0.6946 | | No log | 2.0 | 14 | 1.5114 | 0.405 | 0.7436 | 3.4265 | 0.405 | 0.2289 | 0.2861 | 0.4056 | | No log | 3.0 | 21 | 1.0555 | 0.61 | 0.5700 | 1.9393 | 0.61 | 0.5508 | 0.3108 | 0.2025 | | No log | 4.0 | 28 | 0.8188 | 0.695 | 0.4182 | 1.7503 | 0.695 | 0.6308 | 0.2374 | 0.1153 | | No log | 5.0 | 35 | 0.8072 | 0.74 | 0.3863 | 1.7844 | 0.74 | 0.7491 | 0.2368 | 0.1059 | | No log | 6.0 | 42 | 0.7259 | 0.745 | 0.3520 | 1.6780 | 0.745 | 0.7403 | 0.2147 | 0.0935 | | No log | 7.0 | 49 | 0.6764 | 0.805 | 0.3188 | 1.5699 | 0.805 | 0.7739 | 0.2217 | 0.0875 | | No log | 8.0 | 56 | 0.6437 | 0.805 | 0.3103 | 1.2385 | 0.805 | 0.7880 | 0.2198 | 0.0667 | | No log | 9.0 | 63 | 0.6173 | 0.79 | 0.3033 | 1.3362 | 0.79 | 0.7910 | 0.2158 | 0.0688 | | No log | 10.0 | 70 | 0.5831 | 0.83 | 0.2846 | 1.1457 | 0.83 | 0.8211 | 0.2293 | 0.0530 | | No log | 11.0 | 77 | 0.6568 | 0.775 | 0.3281 | 1.4185 | 0.775 | 0.7253 | 0.2128 | 0.0708 | | No log | 12.0 | 84 | 0.5838 | 0.815 | 0.2859 | 1.4799 | 0.815 | 0.8079 | 0.2191 | 0.0518 | | No log | 13.0 | 91 | 0.6773 | 0.75 | 0.3546 | 1.1980 | 0.75 | 0.7125 | 0.1970 | 0.0977 | | No log | 14.0 | 98 | 0.5809 | 0.82 | 0.2976 | 1.1146 | 0.82 | 0.8166 | 0.2240 | 0.0850 | | No log | 15.0 | 105 | 0.5589 | 0.8 | 0.2913 | 1.0169 | 0.8000 | 0.7942 | 0.2018 | 0.0655 | | No log | 16.0 | 112 | 0.5495 | 0.835 | 0.2857 | 1.1661 | 0.835 | 0.8217 | 0.1943 | 0.0800 | | No log | 17.0 | 119 | 0.5280 | 0.83 | 0.2697 | 1.0304 | 0.83 | 0.8314 | 0.2216 | 0.0511 | | No log | 18.0 | 126 | 0.5174 | 0.85 | 0.2583 | 1.1388 | 0.85 | 0.8392 | 0.1992 | 0.0540 | | No log | 19.0 | 133 | 0.5271 | 0.81 | 0.2838 | 0.9360 | 0.81 | 0.8129 | 0.1742 | 0.0622 | | No log | 20.0 | 140 | 0.5098 | 0.845 | 0.2604 | 1.1335 | 0.845 | 0.8353 | 0.2040 | 0.0599 | | No log | 21.0 | 147 | 0.5219 | 0.82 | 0.2725 | 0.9910 | 0.82 | 0.8145 | 0.1831 | 0.0497 | | No log | 22.0 | 154 | 0.5195 | 0.835 | 0.2706 | 1.0873 | 0.835 | 0.8390 | 0.1908 | 0.0515 | | No log | 23.0 | 161 | 0.5122 | 0.835 | 0.2666 | 1.0898 | 0.835 | 
0.8399 | 0.1998 | 0.0487 | | No log | 24.0 | 168 | 0.5158 | 0.825 | 0.2723 | 1.0534 | 0.825 | 0.8210 | 0.1811 | 0.0507 | | No log | 25.0 | 175 | 0.5059 | 0.825 | 0.2654 | 0.9966 | 0.825 | 0.8212 | 0.1910 | 0.0487 | | No log | 26.0 | 182 | 0.5033 | 0.825 | 0.2648 | 0.9836 | 0.825 | 0.8212 | 0.1768 | 0.0516 | | No log | 27.0 | 189 | 0.5114 | 0.835 | 0.2703 | 0.9847 | 0.835 | 0.8353 | 0.2040 | 0.0512 | | No log | 28.0 | 196 | 0.5047 | 0.84 | 0.2654 | 0.9774 | 0.8400 | 0.8359 | 0.1868 | 0.0494 | | No log | 29.0 | 203 | 0.5027 | 0.84 | 0.2656 | 0.9674 | 0.8400 | 0.8359 | 0.1820 | 0.0502 | | No log | 30.0 | 210 | 0.5035 | 0.835 | 0.2660 | 0.9606 | 0.835 | 0.8296 | 0.1781 | 0.0494 | | No log | 31.0 | 217 | 0.5010 | 0.835 | 0.2642 | 0.9628 | 0.835 | 0.8296 | 0.1893 | 0.0487 | | No log | 32.0 | 224 | 0.5032 | 0.835 | 0.2652 | 0.9705 | 0.835 | 0.8296 | 0.1913 | 0.0494 | | No log | 33.0 | 231 | 0.5052 | 0.825 | 0.2664 | 0.9657 | 0.825 | 0.8231 | 0.1882 | 0.0503 | | No log | 34.0 | 238 | 0.5047 | 0.825 | 0.2667 | 0.9605 | 0.825 | 0.8168 | 0.1938 | 0.0508 | | No log | 35.0 | 245 | 0.5019 | 0.835 | 0.2642 | 0.9596 | 0.835 | 0.8296 | 0.1846 | 0.0491 | | No log | 36.0 | 252 | 0.5035 | 0.835 | 0.2648 | 0.9646 | 0.835 | 0.8296 | 0.2064 | 0.0492 | | No log | 37.0 | 259 | 0.5020 | 0.835 | 0.2645 | 0.9589 | 0.835 | 0.8296 | 0.2036 | 0.0491 | | No log | 38.0 | 266 | 0.5023 | 0.83 | 0.2642 | 0.9595 | 0.83 | 0.8262 | 0.1798 | 0.0495 | | No log | 39.0 | 273 | 0.5023 | 0.835 | 0.2643 | 0.9615 | 0.835 | 0.8296 | 0.1921 | 0.0491 | | No log | 40.0 | 280 | 0.5024 | 0.835 | 0.2645 | 0.9589 | 0.835 | 0.8292 | 0.1813 | 0.0499 | | No log | 41.0 | 287 | 0.5018 | 0.835 | 0.2638 | 0.9583 | 0.835 | 0.8296 | 0.1743 | 0.0492 | | No log | 42.0 | 294 | 0.5018 | 0.83 | 0.2640 | 0.9592 | 0.83 | 0.8266 | 0.1839 | 0.0498 | | No log | 43.0 | 301 | 0.5033 | 0.84 | 0.2650 | 0.9588 | 0.8400 | 0.8322 | 0.1914 | 0.0497 | | No log | 44.0 | 308 | 0.5009 | 0.83 | 0.2632 | 0.9603 | 0.83 | 0.8266 | 0.1652 | 0.0495 | | No log | 45.0 | 315 | 0.5049 | 0.835 | 0.2659 | 0.9587 | 0.835 | 0.8344 | 0.2130 | 0.0502 | | No log | 46.0 | 322 | 0.5018 | 0.835 | 0.2637 | 0.9592 | 0.835 | 0.8344 | 0.1685 | 0.0496 | | No log | 47.0 | 329 | 0.5009 | 0.835 | 0.2632 | 0.9578 | 0.835 | 0.8296 | 0.1971 | 0.0491 | | No log | 48.0 | 336 | 0.5022 | 0.835 | 0.2641 | 0.9574 | 0.835 | 0.8288 | 0.2050 | 0.0495 | | No log | 49.0 | 343 | 0.5017 | 0.835 | 0.2635 | 0.9586 | 0.835 | 0.8349 | 0.2025 | 0.0493 | | No log | 50.0 | 350 | 0.5022 | 0.84 | 0.2640 | 0.9572 | 0.8400 | 0.8322 | 0.1926 | 0.0493 | | No log | 51.0 | 357 | 0.5022 | 0.835 | 0.2638 | 0.9591 | 0.835 | 0.8288 | 0.1948 | 0.0494 | | No log | 52.0 | 364 | 0.5025 | 0.835 | 0.2640 | 0.9573 | 0.835 | 0.8344 | 0.2119 | 0.0497 | | No log | 53.0 | 371 | 0.5022 | 0.835 | 0.2638 | 0.9575 | 0.835 | 0.8292 | 0.2176 | 0.0497 | | No log | 54.0 | 378 | 0.5020 | 0.83 | 0.2637 | 0.9575 | 0.83 | 0.8257 | 0.1935 | 0.0498 | | No log | 55.0 | 385 | 0.5024 | 0.835 | 0.2640 | 0.9572 | 0.835 | 0.8288 | 0.2040 | 0.0491 | | No log | 56.0 | 392 | 0.5023 | 0.835 | 0.2639 | 0.9581 | 0.835 | 0.8288 | 0.2123 | 0.0492 | | No log | 57.0 | 399 | 0.5018 | 0.84 | 0.2635 | 0.9575 | 0.8400 | 0.8322 | 0.2086 | 0.0487 | | No log | 58.0 | 406 | 0.5023 | 0.835 | 0.2639 | 0.9568 | 0.835 | 0.8292 | 0.2094 | 0.0496 | | No log | 59.0 | 413 | 0.5016 | 0.83 | 0.2633 | 0.9563 | 0.83 | 0.8257 | 0.1930 | 0.0493 | | No log | 60.0 | 420 | 0.5015 | 0.84 | 0.2633 | 0.9565 | 0.8400 | 0.8322 | 0.2004 | 0.0488 | | No log | 61.0 | 427 | 0.5017 | 0.84 | 0.2635 | 0.9559 | 
0.8400 | 0.8322 | 0.2004 | 0.0491 | | No log | 62.0 | 434 | 0.5018 | 0.83 | 0.2635 | 0.9563 | 0.83 | 0.8257 | 0.1994 | 0.0497 | | No log | 63.0 | 441 | 0.5020 | 0.835 | 0.2636 | 0.9572 | 0.835 | 0.8288 | 0.2040 | 0.0490 | | No log | 64.0 | 448 | 0.5020 | 0.835 | 0.2636 | 0.9565 | 0.835 | 0.8288 | 0.2036 | 0.0490 | | No log | 65.0 | 455 | 0.5017 | 0.835 | 0.2634 | 0.9566 | 0.835 | 0.8288 | 0.1959 | 0.0490 | | No log | 66.0 | 462 | 0.5018 | 0.84 | 0.2635 | 0.9561 | 0.8400 | 0.8322 | 0.2012 | 0.0491 | | No log | 67.0 | 469 | 0.5016 | 0.84 | 0.2633 | 0.9566 | 0.8400 | 0.8322 | 0.1946 | 0.0489 | | No log | 68.0 | 476 | 0.5016 | 0.84 | 0.2633 | 0.9565 | 0.8400 | 0.8322 | 0.1946 | 0.0488 | | No log | 69.0 | 483 | 0.5018 | 0.835 | 0.2634 | 0.9567 | 0.835 | 0.8288 | 0.1955 | 0.0490 | | No log | 70.0 | 490 | 0.5012 | 0.84 | 0.2631 | 0.9562 | 0.8400 | 0.8322 | 0.1945 | 0.0488 | | No log | 71.0 | 497 | 0.5018 | 0.835 | 0.2635 | 0.9568 | 0.835 | 0.8288 | 0.1958 | 0.0491 | | 0.1944 | 72.0 | 504 | 0.5016 | 0.84 | 0.2633 | 0.9562 | 0.8400 | 0.8322 | 0.2034 | 0.0490 | | 0.1944 | 73.0 | 511 | 0.5013 | 0.84 | 0.2632 | 0.9558 | 0.8400 | 0.8322 | 0.1944 | 0.0488 | | 0.1944 | 74.0 | 518 | 0.5013 | 0.84 | 0.2631 | 0.9562 | 0.8400 | 0.8322 | 0.1943 | 0.0487 | | 0.1944 | 75.0 | 525 | 0.5016 | 0.835 | 0.2633 | 0.9560 | 0.835 | 0.8344 | 0.2035 | 0.0495 | | 0.1944 | 76.0 | 532 | 0.5018 | 0.84 | 0.2634 | 0.9563 | 0.8400 | 0.8322 | 0.2093 | 0.0487 | | 0.1944 | 77.0 | 539 | 0.5012 | 0.84 | 0.2630 | 0.9565 | 0.8400 | 0.8322 | 0.1941 | 0.0488 | | 0.1944 | 78.0 | 546 | 0.5015 | 0.84 | 0.2632 | 0.9561 | 0.8400 | 0.8375 | 0.2008 | 0.0489 | | 0.1944 | 79.0 | 553 | 0.5016 | 0.835 | 0.2633 | 0.9560 | 0.835 | 0.8288 | 0.1957 | 0.0490 | | 0.1944 | 80.0 | 560 | 0.5015 | 0.84 | 0.2631 | 0.9568 | 0.8400 | 0.8375 | 0.2093 | 0.0488 | | 0.1944 | 81.0 | 567 | 0.5015 | 0.835 | 0.2632 | 0.9561 | 0.835 | 0.8288 | 0.1957 | 0.0491 | | 0.1944 | 82.0 | 574 | 0.5014 | 0.835 | 0.2631 | 0.9565 | 0.835 | 0.8288 | 0.1949 | 0.0489 | | 0.1944 | 83.0 | 581 | 0.5015 | 0.835 | 0.2632 | 0.9563 | 0.835 | 0.8288 | 0.1957 | 0.0490 | | 0.1944 | 84.0 | 588 | 0.5015 | 0.84 | 0.2632 | 0.9559 | 0.8400 | 0.8322 | 0.2031 | 0.0488 | | 0.1944 | 85.0 | 595 | 0.5012 | 0.84 | 0.2630 | 0.9560 | 0.8400 | 0.8322 | 0.1944 | 0.0488 | | 0.1944 | 86.0 | 602 | 0.5012 | 0.84 | 0.2630 | 0.9561 | 0.8400 | 0.8322 | 0.1944 | 0.0488 | | 0.1944 | 87.0 | 609 | 0.5012 | 0.84 | 0.2630 | 0.9562 | 0.8400 | 0.8322 | 0.1943 | 0.0488 | | 0.1944 | 88.0 | 616 | 0.5012 | 0.84 | 0.2630 | 0.9561 | 0.8400 | 0.8322 | 0.2030 | 0.0488 | | 0.1944 | 89.0 | 623 | 0.5013 | 0.845 | 0.2631 | 0.9559 | 0.845 | 0.8409 | 0.1995 | 0.0488 | | 0.1944 | 90.0 | 630 | 0.5013 | 0.845 | 0.2631 | 0.9559 | 0.845 | 0.8409 | 0.1995 | 0.0488 | | 0.1944 | 91.0 | 637 | 0.5012 | 0.845 | 0.2630 | 0.9559 | 0.845 | 0.8409 | 0.1994 | 0.0487 | | 0.1944 | 92.0 | 644 | 0.5013 | 0.845 | 0.2631 | 0.9561 | 0.845 | 0.8409 | 0.1995 | 0.0487 | | 0.1944 | 93.0 | 651 | 0.5012 | 0.84 | 0.2630 | 0.9560 | 0.8400 | 0.8322 | 0.2031 | 0.0488 | | 0.1944 | 94.0 | 658 | 0.5013 | 0.84 | 0.2630 | 0.9558 | 0.8400 | 0.8322 | 0.1944 | 0.0488 | | 0.1944 | 95.0 | 665 | 0.5012 | 0.84 | 0.2630 | 0.9558 | 0.8400 | 0.8322 | 0.1944 | 0.0488 | | 0.1944 | 96.0 | 672 | 0.5012 | 0.84 | 0.2630 | 0.9558 | 0.8400 | 0.8322 | 0.1944 | 0.0488 | | 0.1944 | 97.0 | 679 | 0.5012 | 0.845 | 0.2630 | 0.9559 | 0.845 | 0.8409 | 0.1994 | 0.0487 | | 0.1944 | 98.0 | 686 | 0.5012 | 0.84 | 0.2630 | 0.9559 | 0.8400 | 0.8322 | 0.1944 | 0.0488 | | 0.1944 | 99.0 | 693 | 0.5012 | 0.845 | 
0.2630 | 0.9560 | 0.845 | 0.8409 | 0.2081 | 0.0487 | | 0.1944 | 100.0 | 700 | 0.5012 | 0.845 | 0.2630 | 0.9559 | 0.845 | 0.8409 | 0.2081 | 0.0487 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
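The auto-generated card above stops at the training log and includes no usage snippet; a minimal inference sketch (assuming the checkpoint and its image processor are published under a repo id like jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.7, inferred from the sibling cards in this dump rather than stated in the card itself) could look like:

```python
from transformers import pipeline
from PIL import Image

# Hypothetical repo id, inferred from the sibling cards in this dump; adjust if the
# checkpoint lives elsewhere.
repo_id = "jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.7"

classifier = pipeline("image-classification", model=repo_id)

# Tobacco3482 samples are scanned document pages; any RGB image file works here.
image = Image.open("example_document.png").convert("RGB")
for prediction in classifier(image, top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```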
Masik001/combined-GI-RVC-models
Masik001
2023-07-10T19:43:01Z
0
1
null
[ "region:us" ]
null
2023-07-10T19:42:16Z
===== Application Startup at 2023-07-10 13:56:11 ===== 2023-07-10 17:36:30 | INFO | faiss.loader | Loading faiss with AVX2 support. 2023-07-10 17:36:30 | INFO | faiss.loader | Successfully loaded faiss with AVX2 support. 没有发现支持的N卡, 使用CPU进行推理 2023-07-10 17:36:31 | INFO | fairseq.tasks.hubert_pretraining | current directory is /home/user/app 2023-07-10 17:36:31 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', 'fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False} 2023-07-10 17:36:31 | INFO | fairseq.models.hubert.hubert | HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_first': False, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': False, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False} gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: aether-jp / added_IVF865_Flat_nprobe_1_aether-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: albedo-jp / added_IVF641_Flat_nprobe_1_albedo-jp_v1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: alhaitham-jp / added_IVF519_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: ayaka-jp / added_IVF1018_Flat_nprobe_1_ayaka_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: ayato-jp / added_IVF1304_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: barbara-jp / added_IVF548_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: charlotte-jp / added_IVF1318_Flat_nprobe_1_charlotte-jp_v2_400.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: childe-jp / added_IVF684_Flat_nprobe_1_childe-v2_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: chongyun-jp / added_IVF545_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: cyno-jp / added_IVF380_Flat_nprobe_1_cyno-jp_v1.index | (V1) gin_channels: 256 
self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: diluc-jp / added_IVF1511_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: eula-jp / added_IVF2219_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: faruzan-jp / added_IVF256_Flat_nprobe_1_faruzan-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: fischl-jp / added_IVF1225_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: ganyu-jp / added_IVF1636_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: heizou-jp / added_IVF466_Flat_nprobe_1_heizou-jp_v1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: hutao-jp / added_IVF265_Flat_nprobe_5.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: itto-jp / added_IVF4454_Flat_nprobe_1_itto-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: kaeya-jp / added_IVF1655_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: kaveh-jp / added_IVF613_Flat_nprobe_1_kaveh_v2_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: kazuha-jp / added_IVF860_Flat_nprobe_1_kazuha_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: keqing-jp / added_IVF1634_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: kirara-jp / added_IVF672_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: klee-jp / added_IVF282_Flat_nprobe_5.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: kokomi-jp / added_IVF934_Flat_nprobe_1_kokomi_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: lumine-jp / added_IVF938_Flat_nprobe_1_lumine-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: mona-jp / added_IVF2165_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: nahida-jp / added_IVF1062_Flat_nprobe_1_nahida-v2_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: nilou-jp / added_IVF218_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: paimon-jp / added_IVF3904_Flat_nprobe_1_paimon-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: raiden-jp / added_IVF4256_Flat_nprobe_1_raiden-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: signora-jp / added_IVF478_Flat_nprobe_1_signora-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: sucrose-jp / added_IVF884_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: thoma-jp / added_IVF366_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> 
Model loaded: tighnari-jp / added_IVF446_Flat_nprobe_1_tignari-jp_v1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: venti-jp / added_IVF3591_Flat_nprobe_1_venti-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: wanderer-jp / added_IVF953_Flat_nprobe_1_wanderer-v2_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: xiao-jp / added_IVF3205_Flat_nprobe_1_xiao-jp_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: yae-jp / added_IVF1097_Flat_nprobe_1_yae-v2_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: yanfei-jp / added_IVF1271_Flat_nprobe_1_yanfei-v2_v2.index | (V2) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: yelan-jp / added_IVF2051_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: yoimiya-jp / added_IVF2034_Flat_nprobe_1.index | (V1) gin_channels: 256 self.spk_embed_dim: 109 <All keys matched successfully> Model loaded: zhongli-jp / added_IVF1672_Flat_nprobe_1.index | (V1) Running on local URL: http://0.0.0.0:7860 To create a public link, set `share=True` in `launch()`. [2023-07-10 17:37]: npy: 2.0945026874542236, f0: 0.05994224548339844s, infer: 17.599822521209717s [2023-07-10 17:38]: npy: 3.1487624645233154, f0: 0.022048234939575195s, infer: 25.596487760543823s [2023-07-10 17:39]: npy: 3.693798780441284, f0: 0.017490386962890625s, infer: 32.087180376052856s [2023-07-10 17:39]: npy: 2.5506346225738525, f0: 0.013794660568237305s, infer: 26.60752511024475s [2023-07-10 17:40]: npy: 2.6092371940612793, f0: 0.03858685493469238s, infer: 26.312453031539917s [2023-07-10 17:41]: npy: 2.615102767944336, f0: 0.03931307792663574s, infer: 26.40330672264099s [2023-07-10 17:43]: npy: 3.1028923988342285, f0: 0.05546903610229492s, infer: 32.91775321960449s [2023-07-10 17:44]: npy: 2.839845657348633, f0: 0.046269893646240234s, infer: 27.98230767250061s [2023-07-10 17:44]: npy: 3.3039710521698, f0: 0.020084142684936523s, infer: 29.59837293624878s [2023-07-10 17:45]: npy: 3.30319881439209, f0: 0.03941464424133301s, infer: 32.42077875137329s [2023-07-10 17:46]: npy: 2.90372371673584, f0: 0.0513463020324707s, infer: 28.517998695373535s [2023-07-10 17:47]: npy: 3.4118876457214355, f0: 0.10508394241333008s, infer: 31.312357664108276s [2023-07-10 17:47]: npy: 4.102552890777588, f0: 0.02527928352355957s, infer: 33.81402325630188s [2023-07-10 17:48]: npy: 2.4004595279693604, f0: 0.09933662414550781s, infer: 29.89732074737549s [2023-07-10 17:49]: npy: 3.2991466522216797, f0: 0.03225088119506836s, infer: 29.510783195495605s [2023-07-10 17:49]: npy: 3.4149115085601807, f0: 0.04070758819580078s, infer: 30.8032488822937s
jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.5
jordyvl
2023-07-10T19:16:42Z
166
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T18:37:44Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_tobacco3482_kd_CEKD_t2.5_a0.5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_tobacco3482_kd_CEKD_t2.5_a0.5 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4300 - Accuracy: 0.83 - Brier Loss: 0.2807 - Nll: 1.0350 - F1 Micro: 0.83 - F1 Macro: 0.8295 - Ece: 0.2287 - Aurc: 0.0560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 7 | 1.6525 | 0.225 | 0.8757 | 5.3231 | 0.225 | 0.1387 | 0.2689 | 0.6977 | | No log | 2.0 | 14 | 1.3106 | 0.405 | 0.7470 | 3.3487 | 0.405 | 0.2195 | 0.2936 | 0.4032 | | No log | 3.0 | 21 | 0.9127 | 0.585 | 0.5785 | 1.8686 | 0.585 | 0.5142 | 0.2974 | 0.2067 | | No log | 4.0 | 28 | 0.7280 | 0.715 | 0.4339 | 1.6780 | 0.715 | 0.6761 | 0.2672 | 0.1204 | | No log | 5.0 | 35 | 0.6523 | 0.775 | 0.3676 | 1.6537 | 0.775 | 0.7619 | 0.2554 | 0.0929 | | No log | 6.0 | 42 | 0.5888 | 0.785 | 0.3502 | 1.3926 | 0.785 | 0.7538 | 0.2277 | 0.0908 | | No log | 7.0 | 49 | 0.6113 | 0.805 | 0.3326 | 1.7118 | 0.805 | 0.7903 | 0.2428 | 0.0803 | | No log | 8.0 | 56 | 0.5404 | 0.785 | 0.3178 | 1.1557 | 0.785 | 0.7671 | 0.2183 | 0.0716 | | No log | 9.0 | 63 | 0.5380 | 0.82 | 0.3051 | 1.3231 | 0.82 | 0.8072 | 0.2168 | 0.0773 | | No log | 10.0 | 70 | 0.6035 | 0.775 | 0.3508 | 1.3888 | 0.775 | 0.7682 | 0.2191 | 0.0812 | | No log | 11.0 | 77 | 0.5473 | 0.795 | 0.3202 | 1.2622 | 0.795 | 0.7740 | 0.2303 | 0.0626 | | No log | 12.0 | 84 | 0.4860 | 0.825 | 0.2937 | 1.3575 | 0.825 | 0.8053 | 0.2392 | 0.0727 | | No log | 13.0 | 91 | 0.5046 | 0.81 | 0.3032 | 1.1857 | 0.81 | 0.8086 | 0.2248 | 0.0564 | | No log | 14.0 | 98 | 0.4745 | 0.825 | 0.2870 | 1.2338 | 0.825 | 0.8089 | 0.2441 | 0.0518 | | No log | 15.0 | 105 | 0.4764 | 0.81 | 0.2943 | 1.0325 | 0.81 | 0.8110 | 0.1935 | 0.0556 | | No log | 16.0 | 112 | 0.4918 | 0.81 | 0.3062 | 1.0551 | 0.81 | 0.8015 | 0.2198 | 0.0587 | | No log | 17.0 | 119 | 0.4757 | 0.815 | 0.2970 | 1.4203 | 0.815 | 0.7965 | 0.2263 | 0.0850 | | No log | 18.0 | 126 | 0.4586 | 0.825 | 0.2926 | 1.0361 | 0.825 | 0.8268 | 0.2279 | 0.0583 | | No log | 19.0 | 133 | 0.4503 | 0.835 | 0.2855 | 1.1476 | 0.835 | 0.8301 | 0.2392 | 0.0589 | | No log | 20.0 | 140 | 0.4780 | 0.805 | 0.3105 | 0.9928 | 0.805 | 0.7902 | 0.1988 | 0.0775 | | No log | 21.0 | 147 | 0.4965 | 0.8 | 0.3205 | 1.1887 | 0.8000 | 0.8029 | 0.2410 | 0.0702 | | No log | 22.0 | 154 | 0.4753 | 0.815 | 0.3016 | 0.9609 | 0.815 | 0.8169 | 0.2163 | 0.0580 | | No log | 23.0 | 161 | 0.4733 | 0.8 | 0.3074 | 1.2566 
| 0.8000 | 0.8001 | 0.2162 | 0.0704 | | No log | 24.0 | 168 | 0.4472 | 0.815 | 0.2888 | 1.0352 | 0.815 | 0.8187 | 0.2317 | 0.0590 | | No log | 25.0 | 175 | 0.4434 | 0.815 | 0.2854 | 0.9874 | 0.815 | 0.8186 | 0.2149 | 0.0554 | | No log | 26.0 | 182 | 0.4316 | 0.82 | 0.2754 | 1.0477 | 0.82 | 0.8267 | 0.2195 | 0.0508 | | No log | 27.0 | 189 | 0.4276 | 0.83 | 0.2751 | 1.1016 | 0.83 | 0.8336 | 0.2050 | 0.0525 | | No log | 28.0 | 196 | 0.4329 | 0.82 | 0.2795 | 1.0537 | 0.82 | 0.8220 | 0.2158 | 0.0611 | | No log | 29.0 | 203 | 0.4327 | 0.82 | 0.2827 | 1.1766 | 0.82 | 0.8237 | 0.2024 | 0.0603 | | No log | 30.0 | 210 | 0.4317 | 0.82 | 0.2820 | 1.0331 | 0.82 | 0.8219 | 0.2083 | 0.0611 | | No log | 31.0 | 217 | 0.4316 | 0.82 | 0.2803 | 1.0974 | 0.82 | 0.8263 | 0.1984 | 0.0575 | | No log | 32.0 | 224 | 0.4340 | 0.82 | 0.2833 | 1.0384 | 0.82 | 0.8240 | 0.2202 | 0.0590 | | No log | 33.0 | 231 | 0.4333 | 0.81 | 0.2824 | 1.0355 | 0.81 | 0.8160 | 0.2103 | 0.0586 | | No log | 34.0 | 238 | 0.4309 | 0.83 | 0.2817 | 1.1015 | 0.83 | 0.8307 | 0.2107 | 0.0577 | | No log | 35.0 | 245 | 0.4321 | 0.82 | 0.2817 | 1.0359 | 0.82 | 0.8229 | 0.2147 | 0.0590 | | No log | 36.0 | 252 | 0.4304 | 0.825 | 0.2802 | 1.1016 | 0.825 | 0.8257 | 0.2137 | 0.0569 | | No log | 37.0 | 259 | 0.4303 | 0.825 | 0.2811 | 1.0990 | 0.825 | 0.8268 | 0.2149 | 0.0581 | | No log | 38.0 | 266 | 0.4314 | 0.825 | 0.2814 | 1.1003 | 0.825 | 0.8257 | 0.2163 | 0.0581 | | No log | 39.0 | 273 | 0.4302 | 0.82 | 0.2806 | 1.1007 | 0.82 | 0.8226 | 0.2102 | 0.0576 | | No log | 40.0 | 280 | 0.4307 | 0.825 | 0.2809 | 1.0376 | 0.825 | 0.8264 | 0.2049 | 0.0573 | | No log | 41.0 | 287 | 0.4303 | 0.82 | 0.2808 | 1.0434 | 0.82 | 0.8226 | 0.2096 | 0.0574 | | No log | 42.0 | 294 | 0.4310 | 0.825 | 0.2817 | 1.0376 | 0.825 | 0.8268 | 0.2140 | 0.0580 | | No log | 43.0 | 301 | 0.4310 | 0.825 | 0.2813 | 1.0391 | 0.825 | 0.8257 | 0.2147 | 0.0580 | | No log | 44.0 | 308 | 0.4301 | 0.825 | 0.2808 | 1.0389 | 0.825 | 0.8257 | 0.2064 | 0.0573 | | No log | 45.0 | 315 | 0.4305 | 0.83 | 0.2811 | 1.0419 | 0.83 | 0.8307 | 0.2300 | 0.0577 | | No log | 46.0 | 322 | 0.4303 | 0.82 | 0.2808 | 1.0423 | 0.82 | 0.8226 | 0.2197 | 0.0582 | | No log | 47.0 | 329 | 0.4304 | 0.825 | 0.2811 | 1.0405 | 0.825 | 0.8257 | 0.2240 | 0.0580 | | No log | 48.0 | 336 | 0.4300 | 0.82 | 0.2805 | 1.0407 | 0.82 | 0.8226 | 0.2105 | 0.0574 | | No log | 49.0 | 343 | 0.4307 | 0.825 | 0.2812 | 1.0381 | 0.825 | 0.8257 | 0.2252 | 0.0577 | | No log | 50.0 | 350 | 0.4304 | 0.82 | 0.2810 | 1.0422 | 0.82 | 0.8226 | 0.2353 | 0.0578 | | No log | 51.0 | 357 | 0.4310 | 0.825 | 0.2813 | 1.0382 | 0.825 | 0.8264 | 0.2153 | 0.0569 | | No log | 52.0 | 364 | 0.4309 | 0.82 | 0.2814 | 1.0380 | 0.82 | 0.8226 | 0.2282 | 0.0574 | | No log | 53.0 | 371 | 0.4307 | 0.825 | 0.2813 | 1.0357 | 0.825 | 0.8264 | 0.2250 | 0.0568 | | No log | 54.0 | 378 | 0.4305 | 0.82 | 0.2810 | 1.0366 | 0.82 | 0.8226 | 0.2284 | 0.0575 | | No log | 55.0 | 385 | 0.4304 | 0.825 | 0.2811 | 1.0351 | 0.825 | 0.8264 | 0.2241 | 0.0566 | | No log | 56.0 | 392 | 0.4308 | 0.825 | 0.2813 | 1.0369 | 0.825 | 0.8257 | 0.2414 | 0.0572 | | No log | 57.0 | 399 | 0.4305 | 0.825 | 0.2810 | 1.0356 | 0.825 | 0.8257 | 0.2322 | 0.0571 | | No log | 58.0 | 406 | 0.4302 | 0.82 | 0.2808 | 1.0359 | 0.82 | 0.8226 | 0.2368 | 0.0569 | | No log | 59.0 | 413 | 0.4302 | 0.82 | 0.2809 | 1.0346 | 0.82 | 0.8226 | 0.2271 | 0.0569 | | No log | 60.0 | 420 | 0.4303 | 0.82 | 0.2809 | 1.0357 | 0.82 | 0.8226 | 0.2272 | 0.0570 | | No log | 61.0 | 427 | 0.4304 | 0.825 | 0.2810 | 1.0360 | 0.825 | 0.8257 | 
0.2325 | 0.0569 | | No log | 62.0 | 434 | 0.4303 | 0.825 | 0.2809 | 1.0360 | 0.825 | 0.8257 | 0.2321 | 0.0568 | | No log | 63.0 | 441 | 0.4303 | 0.83 | 0.2809 | 1.0356 | 0.83 | 0.8295 | 0.2300 | 0.0562 | | No log | 64.0 | 448 | 0.4304 | 0.825 | 0.2810 | 1.0347 | 0.825 | 0.8264 | 0.2242 | 0.0564 | | No log | 65.0 | 455 | 0.4301 | 0.83 | 0.2808 | 1.0361 | 0.83 | 0.8295 | 0.2384 | 0.0564 | | No log | 66.0 | 462 | 0.4303 | 0.83 | 0.2810 | 1.0359 | 0.83 | 0.8295 | 0.2293 | 0.0563 | | No log | 67.0 | 469 | 0.4302 | 0.83 | 0.2809 | 1.0360 | 0.83 | 0.8295 | 0.2386 | 0.0564 | | No log | 68.0 | 476 | 0.4304 | 0.83 | 0.2810 | 1.0360 | 0.83 | 0.8295 | 0.2384 | 0.0563 | | No log | 69.0 | 483 | 0.4305 | 0.83 | 0.2812 | 1.0355 | 0.83 | 0.8295 | 0.2295 | 0.0564 | | No log | 70.0 | 490 | 0.4302 | 0.825 | 0.2808 | 1.0354 | 0.825 | 0.8264 | 0.2239 | 0.0561 | | No log | 71.0 | 497 | 0.4305 | 0.83 | 0.2812 | 1.0352 | 0.83 | 0.8295 | 0.2296 | 0.0564 | | 0.1776 | 72.0 | 504 | 0.4303 | 0.83 | 0.2808 | 1.0356 | 0.83 | 0.8295 | 0.2287 | 0.0561 | | 0.1776 | 73.0 | 511 | 0.4301 | 0.825 | 0.2807 | 1.0351 | 0.825 | 0.8264 | 0.2348 | 0.0563 | | 0.1776 | 74.0 | 518 | 0.4304 | 0.83 | 0.2811 | 1.0353 | 0.83 | 0.8295 | 0.2195 | 0.0562 | | 0.1776 | 75.0 | 525 | 0.4301 | 0.825 | 0.2808 | 1.0355 | 0.825 | 0.8257 | 0.2320 | 0.0568 | | 0.1776 | 76.0 | 532 | 0.4302 | 0.83 | 0.2808 | 1.0348 | 0.83 | 0.8295 | 0.2289 | 0.0561 | | 0.1776 | 77.0 | 539 | 0.4301 | 0.83 | 0.2808 | 1.0355 | 0.83 | 0.8295 | 0.2300 | 0.0562 | | 0.1776 | 78.0 | 546 | 0.4301 | 0.83 | 0.2808 | 1.0354 | 0.83 | 0.8295 | 0.2394 | 0.0563 | | 0.1776 | 79.0 | 553 | 0.4302 | 0.83 | 0.2809 | 1.0346 | 0.83 | 0.8295 | 0.2287 | 0.0560 | | 0.1776 | 80.0 | 560 | 0.4302 | 0.83 | 0.2809 | 1.0353 | 0.83 | 0.8295 | 0.2299 | 0.0563 | | 0.1776 | 81.0 | 567 | 0.4302 | 0.83 | 0.2809 | 1.0350 | 0.83 | 0.8295 | 0.2299 | 0.0563 | | 0.1776 | 82.0 | 574 | 0.4302 | 0.83 | 0.2808 | 1.0354 | 0.83 | 0.8295 | 0.2298 | 0.0560 | | 0.1776 | 83.0 | 581 | 0.4302 | 0.83 | 0.2809 | 1.0350 | 0.83 | 0.8295 | 0.2299 | 0.0561 | | 0.1776 | 84.0 | 588 | 0.4299 | 0.83 | 0.2807 | 1.0352 | 0.83 | 0.8295 | 0.2287 | 0.0561 | | 0.1776 | 85.0 | 595 | 0.4301 | 0.83 | 0.2808 | 1.0349 | 0.83 | 0.8295 | 0.2296 | 0.0562 | | 0.1776 | 86.0 | 602 | 0.4301 | 0.83 | 0.2808 | 1.0351 | 0.83 | 0.8295 | 0.2287 | 0.0562 | | 0.1776 | 87.0 | 609 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2297 | 0.0561 | | 0.1776 | 88.0 | 616 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2287 | 0.0562 | | 0.1776 | 89.0 | 623 | 0.4300 | 0.83 | 0.2807 | 1.0353 | 0.83 | 0.8295 | 0.2296 | 0.0560 | | 0.1776 | 90.0 | 630 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2297 | 0.0559 | | 0.1776 | 91.0 | 637 | 0.4300 | 0.83 | 0.2807 | 1.0352 | 0.83 | 0.8295 | 0.2296 | 0.0562 | | 0.1776 | 92.0 | 644 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2287 | 0.0561 | | 0.1776 | 93.0 | 651 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2297 | 0.0562 | | 0.1776 | 94.0 | 658 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2297 | 0.0560 | | 0.1776 | 95.0 | 665 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2297 | 0.0562 | | 0.1776 | 96.0 | 672 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2296 | 0.0561 | | 0.1776 | 97.0 | 679 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2296 | 0.0560 | | 0.1776 | 98.0 | 686 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2296 | 0.0560 | | 0.1776 | 99.0 | 693 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2287 | 0.0560 | | 0.1776 | 100.0 | 700 | 
0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2287 | 0.0560 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
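The evaluation columns in these cards (Brier Loss, Ece) are reported without definitions; a small NumPy sketch of how such numbers are typically computed from predicted class probabilities (not the exact evaluation code behind this card) is:

```python
import numpy as np

def brier_score(probs: np.ndarray, labels: np.ndarray) -> float:
    """Multiclass Brier score: mean squared error between the predicted
    probability vector and the one-hot true label."""
    one_hot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - one_hot) ** 2, axis=1)))

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """ECE: bin predictions by confidence, then average the |accuracy - confidence|
    gap per bin, weighted by the fraction of samples in the bin."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return float(ece)

# Toy check with random data (8 samples, 3 classes).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=8)
labels = rng.integers(0, 3, size=8)
print(brier_score(probs, labels), expected_calibration_error(probs, labels))
```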
NasimB/gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k
NasimB
2023-07-10T19:10:16Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T16:50:38Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.4226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7606 | 0.29 | 500 | 5.6940 | | 5.4347 | 0.59 | 1000 | 5.2560 | | 5.0945 | 0.88 | 1500 | 5.0226 | | 4.8232 | 1.18 | 2000 | 4.8777 | | 4.675 | 1.47 | 2500 | 4.7626 | | 4.5767 | 1.77 | 3000 | 4.6625 | | 4.4488 | 2.06 | 3500 | 4.5933 | | 4.2612 | 2.36 | 4000 | 4.5563 | | 4.245 | 2.65 | 4500 | 4.4882 | | 4.208 | 2.94 | 5000 | 4.4332 | | 3.9773 | 3.24 | 5500 | 4.4362 | | 3.9484 | 3.53 | 6000 | 4.4046 | | 3.9304 | 3.83 | 6500 | 4.3669 | | 3.7943 | 4.12 | 7000 | 4.3731 | | 3.6517 | 4.42 | 7500 | 4.3646 | | 3.646 | 4.71 | 8000 | 4.3456 | | 3.6381 | 5.01 | 8500 | 4.3333 | | 3.3812 | 5.3 | 9000 | 4.3586 | | 3.3875 | 5.59 | 9500 | 4.3536 | | 3.3847 | 5.89 | 10000 | 4.3483 | | 3.2816 | 6.18 | 10500 | 4.3600 | | 3.2295 | 6.48 | 11000 | 4.3636 | | 3.223 | 6.77 | 11500 | 4.3630 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
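No usage snippet accompanies this card; a minimal generation sketch (assuming the checkpoint is loadable from the modelId listed above, NasimB/gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k) would be:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k",
)

# The card does not describe a prompt format, so a plain English prefix is used here.
output = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```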
MaitreHibou/Reinforce-Cartpole-v1
MaitreHibou
2023-07-10T18:49:32Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T18:49:23Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
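The card names the algorithm (a custom REINFORCE implementation from Unit 4 of the course) without showing it; the core policy-gradient update it refers to is, schematically (a sketch only, not the course's or the author's exact code):

```python
import torch

def reinforce_loss(log_probs: list, rewards: list, gamma: float = 0.99) -> torch.Tensor:
    """REINFORCE: weight the log-probability of each taken action by the
    discounted return that followed it, and minimize the negative sum."""
    returns, g = [], 0.0
    for r in reversed(rewards):            # discounted returns G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns_t = torch.tensor(returns)
    returns_t = (returns_t - returns_t.mean()) / (returns_t.std() + 1e-8)  # variance reduction
    return torch.stack([-lp * g for lp, g in zip(log_probs, returns_t)]).sum()
```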
simonestradasch/COMPner-bert-base-spanish-wwm-cased
simonestradasch
2023-07-10T18:28:38Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "es", "dataset:simonestradasch/NERcomp", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-10T18:07:06Z
--- language: - es tags: - generated_from_trainer datasets: - simonestradasch/NERcomp model-index: - name: COMPner-bert-base-spanish-wwm-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # COMPner-bert-base-spanish-wwm-cased This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the simonestradasch/NERcomp dataset. It achieves the following results on the evaluation set: - Loss: 0.2793 - Body Part Precision: 0.6700 - Body Part Recall: 0.7186 - Body Part F1: 0.6934 - Body Part Number: 565 - Disease Precision: 0.6966 - Disease Recall: 0.7533 - Disease F1: 0.7238 - Disease Number: 1350 - Family Member Precision: 0.9 - Family Member Recall: 0.75 - Family Member F1: 0.8182 - Family Member Number: 24 - Medication Precision: 0.7143 - Medication Recall: 0.6190 - Medication F1: 0.6633 - Medication Number: 105 - Procedure Precision: 0.5233 - Procedure Recall: 0.5125 - Procedure F1: 0.5178 - Procedure Number: 439 - Overall Precision: 0.6640 - Overall Recall: 0.6971 - Overall F1: 0.6802 - Overall Accuracy: 0.9136 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.4741 | 1.0 | 703 | 0.2932 | 0.6449 | 0.6301 | 0.6374 | 565 | 0.6984 | 0.7170 | 0.7076 | 1350 | 0.9412 | 0.6667 | 0.7805 | 24 | 0.8551 | 0.5619 | 0.6782 | 105 | 0.5113 | 0.3599 | 0.4225 | 439 | 0.6674 | 0.6271 | 0.6466 | 0.9091 | | 0.259 | 2.0 | 1406 | 0.2793 | 0.6700 | 0.7186 | 0.6934 | 565 | 0.6966 | 0.7533 | 0.7238 | 1350 | 0.9 | 0.75 | 0.8182 | 24 | 0.7143 | 0.6190 | 0.6633 | 105 | 0.5233 | 0.5125 | 0.5178 | 439 | 0.6640 | 0.6971 | 0.6802 | 0.9136 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
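A usage snippet is not included; a minimal token-classification sketch (assuming the checkpoint is loadable from simonestradasch/COMPner-bert-base-spanish-wwm-cased, the modelId listed above, with the entity types named in the card) could be:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="simonestradasch/COMPner-bert-base-spanish-wwm-cased",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Spanish clinical-style sentence; entity types follow the card (Disease, Body Part, Medication, ...).
text = "El paciente presenta dolor en la rodilla derecha y recibe ibuprofeno."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```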
skrl/IsaacGymEnvs-AnymalTerrain-PPO
skrl
2023-07-10T18:15:29Z
0
0
skrl
[ "skrl", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-24T20:41:55Z
--- library_name: skrl tags: - deep-reinforcement-learning - reinforcement-learning - skrl model-index: - name: PPO results: - metrics: - type: mean_reward value: 19.88 +/- 0.5 name: Total reward (mean) task: type: reinforcement-learning name: reinforcement-learning dataset: name: IsaacGymEnvs-AnymalTerrain type: IsaacGymEnvs-AnymalTerrain --- <!-- --- torch: 19.88 +/- 0.5 jax: 17.24 +/- 0.62 numpy: 17.8 +/- 0.29 --- --> # IsaacGymEnvs-AnymalTerrain-PPO Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments. - **Task:** AnymalTerrain - **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html) # Usage (with skrl) Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts. * PyTorch ```python from skrl.utils.huggingface import download_model_from_huggingface # assuming that there is an agent named `agent` path = download_model_from_huggingface("skrl/IsaacGymEnvs-AnymalTerrain-PPO", filename="agent.pt") agent.load(path) ``` * JAX ```python from skrl.utils.huggingface import download_model_from_huggingface # assuming that there is an agent named `agent` path = download_model_from_huggingface("skrl/IsaacGymEnvs-AnymalTerrain-PPO", filename="agent.pickle") agent.load(path) ``` # Hyperparameters Note: Undefined parameters keep their values by default. ```python # https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters cfg = PPO_DEFAULT_CONFIG.copy() cfg["rollouts"] = 24 # memory_size cfg["learning_epochs"] = 5 cfg["mini_batches"] = 6 # 24 * 4096 / 16384 cfg["discount_factor"] = 0.99 cfg["lambda"] = 0.95 cfg["learning_rate"] = 3e-4 cfg["learning_rate_scheduler"] = KLAdaptiveRL cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008} cfg["random_timesteps"] = 0 cfg["learning_starts"] = 0 cfg["grad_norm_clip"] = 1.0 cfg["ratio_clip"] = 0.2 cfg["value_clip"] = 0.2 cfg["clip_predicted_values"] = True cfg["entropy_loss_scale"] = 0.001 cfg["value_loss_scale"] = 1.0 cfg["kl_threshold"] = 0 cfg["rewards_shaper"] = None cfg["state_preprocessor"] = RunningStandardScaler cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device} cfg["value_preprocessor"] = RunningStandardScaler cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device} ```
TK192828/Bangable-Inanimate-Insanity-2-Microphone
TK192828
2023-07-10T18:15:12Z
0
0
transformers
[ "transformers", "code", "text-generation", "am", "dataset:fka/awesome-chatgpt-prompts", "license:bsl-1.0", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T17:28:35Z
--- title: '[NSFW] C0ffee''s Erotic Story Generator 2' emoji: 🍑 colorFrom: gray colorTo: pink sdk: gradio sdk_version: 3.27.0 app_file: app.py pinned: true license: bsl-1.0 datasets: - fka/awesome-chatgpt-prompts metrics: - character pipeline_tag: text-generation language: - am library_name: transformers tags: - code --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
jordyvl/vit-small_tobacco3482_kd_CEKD_t1.5_a0.7
jordyvl
2023-07-10T17:57:06Z
166
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T17:18:33Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_tobacco3482_kd_CEKD_t1.5_a0.7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_tobacco3482_kd_CEKD_t1.5_a0.7 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4797 - Accuracy: 0.835 - Brier Loss: 0.2522 - Nll: 0.8627 - F1 Micro: 0.835 - F1 Macro: 0.8222 - Ece: 0.1830 - Aurc: 0.0434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 7 | 1.9341 | 0.215 | 0.8749 | 5.3238 | 0.2150 | 0.1264 | 0.2642 | 0.6914 | | No log | 2.0 | 14 | 1.5320 | 0.405 | 0.7410 | 3.5078 | 0.405 | 0.2276 | 0.2957 | 0.4015 | | No log | 3.0 | 21 | 1.0532 | 0.635 | 0.5629 | 2.0153 | 0.635 | 0.5844 | 0.3037 | 0.2006 | | No log | 4.0 | 28 | 0.7915 | 0.715 | 0.4093 | 1.6974 | 0.715 | 0.6762 | 0.2420 | 0.1131 | | No log | 5.0 | 35 | 0.8024 | 0.745 | 0.3869 | 1.7109 | 0.745 | 0.7548 | 0.2160 | 0.1006 | | No log | 6.0 | 42 | 0.7162 | 0.765 | 0.3351 | 1.8105 | 0.765 | 0.7599 | 0.2216 | 0.0874 | | No log | 7.0 | 49 | 0.6966 | 0.785 | 0.3304 | 1.5292 | 0.785 | 0.7682 | 0.2058 | 0.0979 | | No log | 8.0 | 56 | 0.6317 | 0.805 | 0.2995 | 1.3486 | 0.805 | 0.7887 | 0.2266 | 0.0721 | | No log | 9.0 | 63 | 0.6903 | 0.805 | 0.3304 | 1.5866 | 0.805 | 0.7971 | 0.2371 | 0.0995 | | No log | 10.0 | 70 | 0.6223 | 0.805 | 0.2940 | 1.3478 | 0.805 | 0.8114 | 0.2281 | 0.0697 | | No log | 11.0 | 77 | 0.6350 | 0.795 | 0.3145 | 1.3386 | 0.795 | 0.7730 | 0.2063 | 0.0962 | | No log | 12.0 | 84 | 0.5570 | 0.835 | 0.2666 | 1.2662 | 0.835 | 0.8181 | 0.1951 | 0.0553 | | No log | 13.0 | 91 | 0.5610 | 0.81 | 0.2858 | 1.2619 | 0.81 | 0.8002 | 0.1884 | 0.0626 | | No log | 14.0 | 98 | 0.5843 | 0.8 | 0.2961 | 1.0782 | 0.8000 | 0.8083 | 0.1993 | 0.0683 | | No log | 15.0 | 105 | 0.5918 | 0.78 | 0.2965 | 1.1207 | 0.78 | 0.7861 | 0.1895 | 0.0634 | | No log | 16.0 | 112 | 0.5541 | 0.84 | 0.2765 | 1.3189 | 0.8400 | 0.8455 | 0.1969 | 0.0597 | | No log | 17.0 | 119 | 0.5037 | 0.835 | 0.2568 | 0.9024 | 0.835 | 0.8248 | 0.2083 | 0.0499 | | No log | 18.0 | 126 | 0.5050 | 0.85 | 0.2563 | 1.0032 | 0.85 | 0.8441 | 0.2147 | 0.0580 | | No log | 19.0 | 133 | 0.5430 | 0.815 | 0.2779 | 1.1046 | 0.815 | 0.8044 | 0.1906 | 0.0562 | | No log | 20.0 | 140 | 0.5276 | 0.84 | 0.2743 | 0.9964 | 0.8400 | 0.8144 | 0.2104 | 0.0597 | | No log | 21.0 | 147 | 0.5155 | 0.835 | 0.2686 | 0.9556 | 0.835 | 0.8210 | 0.1962 | 0.0572 | | No log | 22.0 | 154 | 0.4937 | 0.835 | 0.2581 | 1.0079 | 0.835 | 0.8172 | 0.1975 | 0.0479 | | No log | 23.0 | 161 | 0.4931 | 0.845 | 0.2533 | 
1.0021 | 0.845 | 0.8270 | 0.1884 | 0.0503 | | No log | 24.0 | 168 | 0.4869 | 0.83 | 0.2554 | 0.9660 | 0.83 | 0.8084 | 0.1945 | 0.0481 | | No log | 25.0 | 175 | 0.4843 | 0.845 | 0.2512 | 0.9979 | 0.845 | 0.8316 | 0.1746 | 0.0466 | | No log | 26.0 | 182 | 0.4866 | 0.835 | 0.2531 | 0.9006 | 0.835 | 0.8188 | 0.1833 | 0.0472 | | No log | 27.0 | 189 | 0.4882 | 0.825 | 0.2562 | 0.8929 | 0.825 | 0.8043 | 0.2023 | 0.0469 | | No log | 28.0 | 196 | 0.4814 | 0.82 | 0.2494 | 0.9122 | 0.82 | 0.8060 | 0.1773 | 0.0451 | | No log | 29.0 | 203 | 0.4749 | 0.835 | 0.2501 | 0.8770 | 0.835 | 0.8252 | 0.1688 | 0.0442 | | No log | 30.0 | 210 | 0.4761 | 0.84 | 0.2490 | 0.8848 | 0.8400 | 0.8250 | 0.2068 | 0.0443 | | No log | 31.0 | 217 | 0.4787 | 0.845 | 0.2508 | 0.8754 | 0.845 | 0.8309 | 0.1635 | 0.0438 | | No log | 32.0 | 224 | 0.4791 | 0.835 | 0.2521 | 0.8711 | 0.835 | 0.8224 | 0.1876 | 0.0446 | | No log | 33.0 | 231 | 0.4779 | 0.84 | 0.2509 | 0.8650 | 0.8400 | 0.8252 | 0.1813 | 0.0436 | | No log | 34.0 | 238 | 0.4774 | 0.84 | 0.2513 | 0.8662 | 0.8400 | 0.8252 | 0.1919 | 0.0441 | | No log | 35.0 | 245 | 0.4760 | 0.835 | 0.2502 | 0.8636 | 0.835 | 0.8224 | 0.1840 | 0.0434 | | No log | 36.0 | 252 | 0.4784 | 0.84 | 0.2509 | 0.8688 | 0.8400 | 0.8281 | 0.1691 | 0.0437 | | No log | 37.0 | 259 | 0.4771 | 0.835 | 0.2507 | 0.8670 | 0.835 | 0.8224 | 0.1936 | 0.0440 | | No log | 38.0 | 266 | 0.4764 | 0.835 | 0.2499 | 0.8614 | 0.835 | 0.8224 | 0.1830 | 0.0434 | | No log | 39.0 | 273 | 0.4769 | 0.835 | 0.2503 | 0.8651 | 0.835 | 0.8224 | 0.2001 | 0.0438 | | No log | 40.0 | 280 | 0.4777 | 0.84 | 0.2514 | 0.8608 | 0.8400 | 0.8281 | 0.1832 | 0.0435 | | No log | 41.0 | 287 | 0.4777 | 0.835 | 0.2504 | 0.8650 | 0.835 | 0.8224 | 0.1953 | 0.0437 | | No log | 42.0 | 294 | 0.4779 | 0.835 | 0.2511 | 0.8629 | 0.835 | 0.8224 | 0.1944 | 0.0440 | | No log | 43.0 | 301 | 0.4790 | 0.835 | 0.2519 | 0.8631 | 0.835 | 0.8222 | 0.1808 | 0.0439 | | No log | 44.0 | 308 | 0.4777 | 0.835 | 0.2509 | 0.8604 | 0.835 | 0.8222 | 0.1886 | 0.0435 | | No log | 45.0 | 315 | 0.4787 | 0.835 | 0.2517 | 0.8620 | 0.835 | 0.8222 | 0.1940 | 0.0437 | | No log | 46.0 | 322 | 0.4774 | 0.84 | 0.2509 | 0.8614 | 0.8400 | 0.8281 | 0.1779 | 0.0433 | | No log | 47.0 | 329 | 0.4785 | 0.835 | 0.2517 | 0.8609 | 0.835 | 0.8222 | 0.1811 | 0.0438 | | No log | 48.0 | 336 | 0.4792 | 0.835 | 0.2521 | 0.8611 | 0.835 | 0.8222 | 0.1849 | 0.0436 | | No log | 49.0 | 343 | 0.4771 | 0.84 | 0.2509 | 0.8623 | 0.8400 | 0.8281 | 0.1908 | 0.0430 | | No log | 50.0 | 350 | 0.4793 | 0.835 | 0.2520 | 0.8633 | 0.835 | 0.8222 | 0.1900 | 0.0435 | | No log | 51.0 | 357 | 0.4786 | 0.83 | 0.2517 | 0.8654 | 0.83 | 0.8159 | 0.1684 | 0.0437 | | No log | 52.0 | 364 | 0.4792 | 0.83 | 0.2521 | 0.8625 | 0.83 | 0.8166 | 0.1915 | 0.0430 | | No log | 53.0 | 371 | 0.4785 | 0.835 | 0.2513 | 0.8652 | 0.835 | 0.8222 | 0.1853 | 0.0434 | | No log | 54.0 | 378 | 0.4798 | 0.835 | 0.2523 | 0.8652 | 0.835 | 0.8222 | 0.1767 | 0.0437 | | No log | 55.0 | 385 | 0.4791 | 0.835 | 0.2519 | 0.8637 | 0.835 | 0.8222 | 0.1891 | 0.0435 | | No log | 56.0 | 392 | 0.4790 | 0.835 | 0.2519 | 0.8614 | 0.835 | 0.8222 | 0.1749 | 0.0429 | | No log | 57.0 | 399 | 0.4782 | 0.835 | 0.2513 | 0.8625 | 0.835 | 0.8222 | 0.1909 | 0.0433 | | No log | 58.0 | 406 | 0.4794 | 0.835 | 0.2521 | 0.8602 | 0.835 | 0.8222 | 0.1758 | 0.0435 | | No log | 59.0 | 413 | 0.4790 | 0.835 | 0.2517 | 0.8617 | 0.835 | 0.8222 | 0.1754 | 0.0432 | | No log | 60.0 | 420 | 0.4791 | 0.835 | 0.2520 | 0.8614 | 0.835 | 0.8222 | 0.1830 | 0.0430 | | No log | 61.0 | 427 | 0.4789 | 0.835 | 
0.2518 | 0.8612 | 0.835 | 0.8222 | 0.1870 | 0.0432 | | No log | 62.0 | 434 | 0.4792 | 0.835 | 0.2520 | 0.8620 | 0.835 | 0.8222 | 0.1902 | 0.0433 | | No log | 63.0 | 441 | 0.4789 | 0.835 | 0.2518 | 0.8619 | 0.835 | 0.8222 | 0.1997 | 0.0431 | | No log | 64.0 | 448 | 0.4797 | 0.835 | 0.2523 | 0.8607 | 0.835 | 0.8222 | 0.1833 | 0.0434 | | No log | 65.0 | 455 | 0.4797 | 0.835 | 0.2522 | 0.8624 | 0.835 | 0.8222 | 0.1922 | 0.0434 | | No log | 66.0 | 462 | 0.4791 | 0.835 | 0.2519 | 0.8620 | 0.835 | 0.8222 | 0.1894 | 0.0430 | | No log | 67.0 | 469 | 0.4792 | 0.835 | 0.2520 | 0.8612 | 0.835 | 0.8222 | 0.1885 | 0.0433 | | No log | 68.0 | 476 | 0.4796 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1918 | 0.0433 | | No log | 69.0 | 483 | 0.4793 | 0.835 | 0.2521 | 0.8628 | 0.835 | 0.8222 | 0.1828 | 0.0433 | | No log | 70.0 | 490 | 0.4792 | 0.835 | 0.2519 | 0.8622 | 0.835 | 0.8222 | 0.1918 | 0.0432 | | No log | 71.0 | 497 | 0.4797 | 0.835 | 0.2523 | 0.8615 | 0.835 | 0.8222 | 0.1836 | 0.0434 | | 0.194 | 72.0 | 504 | 0.4797 | 0.835 | 0.2522 | 0.8618 | 0.835 | 0.8222 | 0.1842 | 0.0433 | | 0.194 | 73.0 | 511 | 0.4794 | 0.835 | 0.2521 | 0.8624 | 0.835 | 0.8222 | 0.1914 | 0.0432 | | 0.194 | 74.0 | 518 | 0.4794 | 0.835 | 0.2521 | 0.8617 | 0.835 | 0.8222 | 0.1915 | 0.0431 | | 0.194 | 75.0 | 525 | 0.4796 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1917 | 0.0434 | | 0.194 | 76.0 | 532 | 0.4795 | 0.835 | 0.2520 | 0.8622 | 0.835 | 0.8222 | 0.1985 | 0.0433 | | 0.194 | 77.0 | 539 | 0.4795 | 0.835 | 0.2520 | 0.8623 | 0.835 | 0.8222 | 0.1985 | 0.0432 | | 0.194 | 78.0 | 546 | 0.4795 | 0.835 | 0.2522 | 0.8621 | 0.835 | 0.8222 | 0.1981 | 0.0432 | | 0.194 | 79.0 | 553 | 0.4798 | 0.835 | 0.2522 | 0.8626 | 0.835 | 0.8222 | 0.1909 | 0.0433 | | 0.194 | 80.0 | 560 | 0.4796 | 0.835 | 0.2521 | 0.8630 | 0.835 | 0.8222 | 0.1984 | 0.0433 | | 0.194 | 81.0 | 567 | 0.4797 | 0.835 | 0.2522 | 0.8619 | 0.835 | 0.8222 | 0.1902 | 0.0434 | | 0.194 | 82.0 | 574 | 0.4797 | 0.835 | 0.2522 | 0.8631 | 0.835 | 0.8222 | 0.1913 | 0.0433 | | 0.194 | 83.0 | 581 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1909 | 0.0433 | | 0.194 | 84.0 | 588 | 0.4797 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1909 | 0.0433 | | 0.194 | 85.0 | 595 | 0.4797 | 0.835 | 0.2522 | 0.8624 | 0.835 | 0.8222 | 0.1909 | 0.0434 | | 0.194 | 86.0 | 602 | 0.4796 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1830 | 0.0433 | | 0.194 | 87.0 | 609 | 0.4797 | 0.835 | 0.2522 | 0.8629 | 0.835 | 0.8222 | 0.1909 | 0.0434 | | 0.194 | 88.0 | 616 | 0.4797 | 0.835 | 0.2521 | 0.8634 | 0.835 | 0.8222 | 0.1830 | 0.0433 | | 0.194 | 89.0 | 623 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1910 | 0.0434 | | 0.194 | 90.0 | 630 | 0.4798 | 0.835 | 0.2523 | 0.8627 | 0.835 | 0.8222 | 0.1909 | 0.0434 | | 0.194 | 91.0 | 637 | 0.4797 | 0.835 | 0.2522 | 0.8625 | 0.835 | 0.8222 | 0.1909 | 0.0434 | | 0.194 | 92.0 | 644 | 0.4797 | 0.835 | 0.2522 | 0.8630 | 0.835 | 0.8222 | 0.1830 | 0.0434 | | 0.194 | 93.0 | 651 | 0.4798 | 0.835 | 0.2522 | 0.8629 | 0.835 | 0.8222 | 0.1910 | 0.0434 | | 0.194 | 94.0 | 658 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1910 | 0.0434 | | 0.194 | 95.0 | 665 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1910 | 0.0434 | | 0.194 | 96.0 | 672 | 0.4798 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1834 | 0.0435 | | 0.194 | 97.0 | 679 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 | | 0.194 | 98.0 | 686 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 | | 0.194 | 99.0 | 693 | 0.4797 | 0.835 | 0.2522 | 
0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 | | 0.194 | 100.0 | 700 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1830 | 0.0434 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
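As a complement to the pipeline sketch earlier in this dump, here is a lower-level inference sketch for this checkpoint (assuming a repo id of jordyvl/vit-small_tobacco3482_kd_CEKD_t1.5_a0.7, which follows the card's name but is not stated in it) that exposes the class probabilities underlying the Brier/ECE columns:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical repo id following the naming pattern of this card.
repo_id = "jordyvl/vit-small_tobacco3482_kd_CEKD_t1.5_a0.7"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example_document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]   # class probabilities, the quantity Brier/ECE are computed from
top = probs.argmax().item()
print(model.config.id2label[top], probs[top].item())
```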
FerhatDk/wav2vec2-base_music_speech_both_classification
FerhatDk
2023-07-10T17:56:34Z
167
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-10T17:00:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base_music_speech_both_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_music_speech_both_classification This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0586 - Accuracy: 0.9847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - lr_scheduler_warmup_steps: 500 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9458 | 1.0 | 66 | 0.8468 | 0.7405 | | 0.3785 | 2.0 | 132 | 0.2951 | 0.9771 | | 0.1762 | 3.0 | 198 | 0.2639 | 0.9313 | | 0.134 | 4.0 | 264 | 0.1084 | 0.9771 | | 0.0782 | 5.0 | 330 | 0.0877 | 0.9771 | | 0.0568 | 6.0 | 396 | 0.0912 | 0.9771 | | 0.0122 | 7.0 | 462 | 0.4056 | 0.9198 | | 0.059 | 8.0 | 528 | 0.0586 | 0.9847 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
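No inference example is given; a minimal audio-classification sketch (assuming the checkpoint is loadable from FerhatDk/wav2vec2-base_music_speech_both_classification, the modelId listed above) would be:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="FerhatDk/wav2vec2-base_music_speech_both_classification",
)

# Path to any short audio clip; the pipeline decodes it (via ffmpeg) and resamples
# to the 16 kHz rate the wav2vec2 feature extractor expects.
for prediction in classifier("example_clip.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```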
RamonGOD/distilbert-base-uncased-finetuned-cola
RamonGOD
2023-07-10T17:32:17Z
62
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T17:00:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: RamonGOD/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # RamonGOD/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1891 - Validation Loss: 0.5654 - Train Matthews Correlation: 0.5209 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5243 | 0.4596 | 0.4917 | 0 | | 0.3246 | 0.5117 | 0.4896 | 1 | | 0.1891 | 0.5654 | 0.5209 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
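The only metric reported is Matthews correlation; as a reminder of what that number measures, a short scikit-learn sketch (not the Keras callback actually used for this card):

```python
from sklearn.metrics import matthews_corrcoef

# Toy labels/predictions for a binary acceptability task like CoLA.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# 1.0 = perfect agreement, 0.0 = chance level, -1.0 = total disagreement.
print(matthews_corrcoef(y_true, y_pred))
```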
cagarraz/rl_course_vizdoom_health_gathering_supreme
cagarraz
2023-07-10T17:23:21Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T17:23:08Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 3.94 +/- 0.20 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r cagarraz/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
gameofdimension/cs324-length-control
gameofdimension
2023-07-10T17:03:29Z
69
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "code", "text2text-generation", "en", "dataset:openwebtext", "arxiv:1910.09700", "license:bsd", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-26T07:40:55Z
--- license: bsd datasets: - openwebtext language: - en library_name: transformers pipeline_tag: text2text-generation metrics: - accuracy tags: - code --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
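The card's "How to Get Started with the Model" section is left as a placeholder; given the repo name gameofdimension/cs324-length-control and the gpt2 tag, a heavily hedged sketch (assuming a standard causal-LM checkpoint and making no assumption about how length conditioning is encoded) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gameofdimension/cs324-length-control"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Plain prompt; if the model expects special length-control tokens, they would be prepended here.
inputs = tokenizer("The length-controlled continuation:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```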
opendiffusion/sentimentcheck
opendiffusion
2023-07-10T16:58:49Z
0
0
tf-keras
[ "tf-keras", "bert", "region:us" ]
null
2023-05-11T18:26:04Z
# Intro

OpenDiffusion's SentimentCheck is an AI model built on TensorFlow, Keras, and pickle serialization. SentimentCheck harnesses deep learning to accurately classify sentiment in text, making it a flexible tool for businesses, researchers, and developers.

## Usage

--- language: - en - nl - de - fr - it - es license: mit ---

# bert-base-multilingual-uncased-sentiment

This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).

This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.

## Training data

Here is the number of product reviews we used for finetuning the model:

| Language | Number of reviews |
| -------- | ----------------- |
| English | 150k |
| Dutch | 80k |
| German | 137k |
| French | 140k |
| Italian | 72k |
| Spanish | 50k |

## Accuracy

The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:

- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.

| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------- | ------------------- |
| English | 67% | 95% |
| Dutch | 57% | 93% |
| German | 61% | 94% |
| French | 59% | 94% |
| Italian | 59% | 95% |
| Spanish | 58% | 95% |
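The Usage section above is left empty, so here is a minimal sketch of how a star-rating sentiment model of this kind is typically queried through the `transformers` pipeline. The repository id below is an assumption (the multilingual star-rating checkpoint described above is commonly published as `nlptown/bert-base-multilingual-uncased-sentiment`); substitute the checkpoint you actually intend to serve.

```python
from transformers import pipeline

# Assumed repository id: swap in the checkpoint you actually want to query.
classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

reviews = [
    "The product arrived quickly and works perfectly.",
    "Der Artikel kam beschädigt an und der Support antwortet nicht.",
]

# Each prediction is a star label such as "4 stars" plus a confidence score.
for review, prediction in zip(reviews, classifier(reviews)):
    print(review, "->", prediction["label"], round(prediction["score"], 3))
```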
jordyvl/vit-small_tobacco3482_kd_MSE
jordyvl
2023-07-10T16:38:44Z
163
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-10T15:58:52Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-small_tobacco3482_kd_MSE results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small_tobacco3482_kd_MSE This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2131 - Accuracy: 0.84 - Brier Loss: 0.2974 - Nll: 0.8913 - F1 Micro: 0.8400 - F1 Macro: 0.8190 - Ece: 0.2456 - Aurc: 0.0512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 7 | 1.4711 | 0.21 | 0.8898 | 6.2752 | 0.2100 | 0.1403 | 0.2702 | 0.7673 | | No log | 2.0 | 14 | 1.0769 | 0.41 | 0.8120 | 5.2446 | 0.41 | 0.2713 | 0.3253 | 0.5170 | | No log | 3.0 | 21 | 0.7901 | 0.51 | 0.7057 | 2.6186 | 0.51 | 0.4114 | 0.3359 | 0.3162 | | No log | 4.0 | 28 | 0.6044 | 0.61 | 0.5736 | 1.8428 | 0.61 | 0.4989 | 0.3358 | 0.1889 | | No log | 5.0 | 35 | 0.4605 | 0.7 | 0.5009 | 1.3395 | 0.7 | 0.6120 | 0.3587 | 0.1321 | | No log | 6.0 | 42 | 0.4484 | 0.73 | 0.4373 | 1.4781 | 0.7300 | 0.6394 | 0.2751 | 0.1150 | | No log | 7.0 | 49 | 0.4406 | 0.765 | 0.4180 | 1.1081 | 0.765 | 0.7193 | 0.3066 | 0.0981 | | No log | 8.0 | 56 | 0.3421 | 0.82 | 0.3575 | 0.9309 | 0.82 | 0.7764 | 0.2867 | 0.0703 | | No log | 9.0 | 63 | 0.4201 | 0.75 | 0.3973 | 1.5859 | 0.75 | 0.7562 | 0.2618 | 0.1051 | | No log | 10.0 | 70 | 0.4086 | 0.795 | 0.3775 | 1.2870 | 0.795 | 0.7701 | 0.3104 | 0.0691 | | No log | 11.0 | 77 | 0.2867 | 0.82 | 0.3251 | 1.2141 | 0.82 | 0.7996 | 0.2511 | 0.0683 | | No log | 12.0 | 84 | 0.2964 | 0.825 | 0.3233 | 1.0042 | 0.825 | 0.8028 | 0.2801 | 0.0538 | | No log | 13.0 | 91 | 0.3010 | 0.81 | 0.3351 | 1.0085 | 0.81 | 0.7735 | 0.2678 | 0.0584 | | No log | 14.0 | 98 | 0.2741 | 0.835 | 0.3194 | 1.0574 | 0.835 | 0.8127 | 0.2982 | 0.0542 | | No log | 15.0 | 105 | 0.2524 | 0.845 | 0.3228 | 1.1162 | 0.845 | 0.8225 | 0.2911 | 0.0568 | | No log | 16.0 | 112 | 0.2652 | 0.83 | 0.3154 | 0.8145 | 0.83 | 0.8130 | 0.2786 | 0.0516 | | No log | 17.0 | 119 | 0.2478 | 0.83 | 0.3241 | 1.1158 | 0.83 | 0.8034 | 0.2776 | 0.0683 | | No log | 18.0 | 126 | 0.2526 | 0.85 | 0.3112 | 1.0132 | 0.85 | 0.8324 | 0.2757 | 0.0517 | | No log | 19.0 | 133 | 0.2423 | 0.855 | 0.3023 | 1.0623 | 0.855 | 0.8382 | 0.2727 | 0.0561 | | No log | 20.0 | 140 | 0.2294 | 0.83 | 0.3112 | 1.1134 | 0.83 | 0.8139 | 0.2697 | 0.0703 | | No log | 21.0 | 147 | 0.2380 | 0.835 | 0.3080 | 0.9961 | 0.835 | 0.8190 | 0.2841 | 0.0489 | | No log | 22.0 | 154 | 0.2362 | 0.84 | 0.3034 | 0.9586 | 0.8400 | 0.8145 | 0.2626 | 0.0520 | | No log | 23.0 | 161 | 0.2252 | 0.86 | 0.2946 | 1.1006 | 0.8600 | 0.8471 | 0.2830 | 0.0495 
| | No log | 24.0 | 168 | 0.2325 | 0.85 | 0.2985 | 0.9069 | 0.85 | 0.8288 | 0.2681 | 0.0533 | | No log | 25.0 | 175 | 0.2335 | 0.825 | 0.3005 | 0.8930 | 0.825 | 0.8000 | 0.2640 | 0.0496 | | No log | 26.0 | 182 | 0.2309 | 0.845 | 0.2984 | 1.0007 | 0.845 | 0.8308 | 0.2573 | 0.0536 | | No log | 27.0 | 189 | 0.2265 | 0.835 | 0.3051 | 1.0092 | 0.835 | 0.8158 | 0.2626 | 0.0603 | | No log | 28.0 | 196 | 0.2192 | 0.83 | 0.2977 | 1.0186 | 0.83 | 0.8019 | 0.2516 | 0.0572 | | No log | 29.0 | 203 | 0.2276 | 0.83 | 0.3017 | 0.9407 | 0.83 | 0.8179 | 0.2553 | 0.0480 | | No log | 30.0 | 210 | 0.2131 | 0.84 | 0.2992 | 0.9232 | 0.8400 | 0.8195 | 0.2541 | 0.0546 | | No log | 31.0 | 217 | 0.2197 | 0.845 | 0.2998 | 0.9012 | 0.845 | 0.8301 | 0.2537 | 0.0569 | | No log | 32.0 | 224 | 0.2138 | 0.85 | 0.2972 | 0.9117 | 0.85 | 0.8349 | 0.2777 | 0.0551 | | No log | 33.0 | 231 | 0.2167 | 0.85 | 0.2969 | 1.0176 | 0.85 | 0.8390 | 0.2676 | 0.0535 | | No log | 34.0 | 238 | 0.2114 | 0.84 | 0.2959 | 0.8912 | 0.8400 | 0.8190 | 0.2512 | 0.0514 | | No log | 35.0 | 245 | 0.2145 | 0.845 | 0.2952 | 0.8960 | 0.845 | 0.8216 | 0.2638 | 0.0492 | | No log | 36.0 | 252 | 0.2146 | 0.845 | 0.2960 | 0.9093 | 0.845 | 0.8301 | 0.2841 | 0.0519 | | No log | 37.0 | 259 | 0.2157 | 0.845 | 0.2973 | 0.9043 | 0.845 | 0.8216 | 0.2614 | 0.0520 | | No log | 38.0 | 266 | 0.2116 | 0.84 | 0.2949 | 0.8871 | 0.8400 | 0.8190 | 0.2639 | 0.0512 | | No log | 39.0 | 273 | 0.2138 | 0.845 | 0.2963 | 0.9002 | 0.845 | 0.8301 | 0.2497 | 0.0512 | | No log | 40.0 | 280 | 0.2129 | 0.84 | 0.2960 | 0.9731 | 0.8400 | 0.8190 | 0.2500 | 0.0511 | | No log | 41.0 | 287 | 0.2139 | 0.845 | 0.2966 | 1.0111 | 0.845 | 0.8301 | 0.2750 | 0.0523 | | No log | 42.0 | 294 | 0.2134 | 0.84 | 0.2959 | 0.9515 | 0.8400 | 0.8190 | 0.2577 | 0.0506 | | No log | 43.0 | 301 | 0.2134 | 0.84 | 0.2972 | 0.9022 | 0.8400 | 0.8190 | 0.2538 | 0.0517 | | No log | 44.0 | 308 | 0.2131 | 0.84 | 0.2966 | 0.9569 | 0.8400 | 0.8190 | 0.2683 | 0.0519 | | No log | 45.0 | 315 | 0.2131 | 0.84 | 0.2965 | 0.8931 | 0.8400 | 0.8190 | 0.2504 | 0.0513 | | No log | 46.0 | 322 | 0.2119 | 0.84 | 0.2963 | 0.8998 | 0.8400 | 0.8190 | 0.2535 | 0.0513 | | No log | 47.0 | 329 | 0.2129 | 0.84 | 0.2973 | 0.9017 | 0.8400 | 0.8190 | 0.2527 | 0.0514 | | No log | 48.0 | 336 | 0.2130 | 0.84 | 0.2971 | 0.8947 | 0.8400 | 0.8190 | 0.2520 | 0.0510 | | No log | 49.0 | 343 | 0.2123 | 0.84 | 0.2972 | 0.9482 | 0.8400 | 0.8190 | 0.2583 | 0.0515 | | No log | 50.0 | 350 | 0.2124 | 0.84 | 0.2970 | 0.9083 | 0.8400 | 0.8190 | 0.2604 | 0.0513 | | No log | 51.0 | 357 | 0.2130 | 0.84 | 0.2974 | 0.8978 | 0.8400 | 0.8190 | 0.2446 | 0.0513 | | No log | 52.0 | 364 | 0.2127 | 0.84 | 0.2975 | 0.8932 | 0.8400 | 0.8190 | 0.2457 | 0.0513 | | No log | 53.0 | 371 | 0.2125 | 0.84 | 0.2972 | 0.8935 | 0.8400 | 0.8190 | 0.2508 | 0.0512 | | No log | 54.0 | 378 | 0.2130 | 0.84 | 0.2975 | 0.8989 | 0.8400 | 0.8190 | 0.2551 | 0.0513 | | No log | 55.0 | 385 | 0.2128 | 0.84 | 0.2972 | 0.8941 | 0.8400 | 0.8190 | 0.2448 | 0.0511 | | No log | 56.0 | 392 | 0.2128 | 0.84 | 0.2974 | 0.8944 | 0.8400 | 0.8190 | 0.2459 | 0.0515 | | No log | 57.0 | 399 | 0.2128 | 0.84 | 0.2973 | 0.8934 | 0.8400 | 0.8190 | 0.2517 | 0.0512 | | No log | 58.0 | 406 | 0.2130 | 0.84 | 0.2973 | 0.8936 | 0.8400 | 0.8190 | 0.2448 | 0.0513 | | No log | 59.0 | 413 | 0.2129 | 0.84 | 0.2973 | 0.8951 | 0.8400 | 0.8190 | 0.2383 | 0.0513 | | No log | 60.0 | 420 | 0.2128 | 0.84 | 0.2972 | 0.8921 | 0.8400 | 0.8190 | 0.2519 | 0.0512 | | No log | 61.0 | 427 | 0.2125 | 0.84 | 0.2974 | 0.8959 | 0.8400 | 0.8190 | 0.2518 | 
0.0515 | | No log | 62.0 | 434 | 0.2128 | 0.84 | 0.2973 | 0.8937 | 0.8400 | 0.8190 | 0.2385 | 0.0513 | | No log | 63.0 | 441 | 0.2131 | 0.84 | 0.2974 | 0.8933 | 0.8400 | 0.8190 | 0.2551 | 0.0512 | | No log | 64.0 | 448 | 0.2129 | 0.84 | 0.2974 | 0.8930 | 0.8400 | 0.8190 | 0.2388 | 0.0512 | | No log | 65.0 | 455 | 0.2129 | 0.84 | 0.2973 | 0.8927 | 0.8400 | 0.8190 | 0.2447 | 0.0513 | | No log | 66.0 | 462 | 0.2129 | 0.84 | 0.2974 | 0.8930 | 0.8400 | 0.8190 | 0.2385 | 0.0513 | | No log | 67.0 | 469 | 0.2129 | 0.84 | 0.2974 | 0.8929 | 0.8400 | 0.8190 | 0.2458 | 0.0512 | | No log | 68.0 | 476 | 0.2130 | 0.84 | 0.2975 | 0.8930 | 0.8400 | 0.8190 | 0.2455 | 0.0512 | | No log | 69.0 | 483 | 0.2130 | 0.84 | 0.2973 | 0.8917 | 0.8400 | 0.8190 | 0.2459 | 0.0513 | | No log | 70.0 | 490 | 0.2129 | 0.84 | 0.2973 | 0.8913 | 0.8400 | 0.8190 | 0.2520 | 0.0513 | | No log | 71.0 | 497 | 0.2131 | 0.84 | 0.2974 | 0.8919 | 0.8400 | 0.8190 | 0.2519 | 0.0513 | | 0.1234 | 72.0 | 504 | 0.2130 | 0.84 | 0.2973 | 0.8917 | 0.8400 | 0.8190 | 0.2457 | 0.0511 | | 0.1234 | 73.0 | 511 | 0.2129 | 0.84 | 0.2974 | 0.8917 | 0.8400 | 0.8190 | 0.2455 | 0.0512 | | 0.1234 | 74.0 | 518 | 0.2129 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 0.8190 | 0.2455 | 0.0512 | | 0.1234 | 75.0 | 525 | 0.2130 | 0.84 | 0.2973 | 0.8917 | 0.8400 | 0.8190 | 0.2519 | 0.0513 | | 0.1234 | 76.0 | 532 | 0.2129 | 0.84 | 0.2974 | 0.8921 | 0.8400 | 0.8190 | 0.2455 | 0.0512 | | 0.1234 | 77.0 | 539 | 0.2130 | 0.84 | 0.2973 | 0.8919 | 0.8400 | 0.8190 | 0.2455 | 0.0511 | | 0.1234 | 78.0 | 546 | 0.2130 | 0.84 | 0.2973 | 0.8924 | 0.8400 | 0.8190 | 0.2455 | 0.0511 | | 0.1234 | 79.0 | 553 | 0.2130 | 0.84 | 0.2974 | 0.8919 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 80.0 | 560 | 0.2130 | 0.84 | 0.2973 | 0.8915 | 0.8400 | 0.8190 | 0.2515 | 0.0512 | | 0.1234 | 81.0 | 567 | 0.2130 | 0.84 | 0.2973 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0511 | | 0.1234 | 82.0 | 574 | 0.2130 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 83.0 | 581 | 0.2130 | 0.84 | 0.2973 | 0.8916 | 0.8400 | 0.8190 | 0.2516 | 0.0512 | | 0.1234 | 84.0 | 588 | 0.2130 | 0.84 | 0.2974 | 0.8920 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 85.0 | 595 | 0.2130 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 86.0 | 602 | 0.2130 | 0.84 | 0.2974 | 0.8917 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 87.0 | 609 | 0.2130 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 0.8190 | 0.2517 | 0.0512 | | 0.1234 | 88.0 | 616 | 0.2130 | 0.84 | 0.2973 | 0.8916 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 89.0 | 623 | 0.2130 | 0.84 | 0.2974 | 0.8912 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 90.0 | 630 | 0.2130 | 0.84 | 0.2973 | 0.8914 | 0.8400 | 0.8190 | 0.2517 | 0.0512 | | 0.1234 | 91.0 | 637 | 0.2131 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 92.0 | 644 | 0.2130 | 0.84 | 0.2973 | 0.8912 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 93.0 | 651 | 0.2130 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 94.0 | 658 | 0.2130 | 0.84 | 0.2973 | 0.8913 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 95.0 | 665 | 0.2130 | 0.84 | 0.2973 | 0.8913 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 96.0 | 672 | 0.2131 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 97.0 | 679 | 0.2131 | 0.84 | 0.2973 | 0.8914 | 0.8400 | 0.8190 | 0.2517 | 0.0512 | | 0.1234 | 98.0 | 686 | 0.2130 | 0.84 | 0.2974 | 0.8912 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | | 0.1234 | 99.0 | 693 | 0.2131 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 
0.8190 | 0.2456 | 0.0512 | | 0.1234 | 100.0 | 700 | 0.2131 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 0.8190 | 0.2456 | 0.0512 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
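No usage example is included in the card; a minimal inference sketch via the `transformers` image-classification pipeline follows. The repository id matches this card, while the input file name is a placeholder for a Tobacco3482-style document scan.

```python
from PIL import Image
from transformers import pipeline

# Placeholder path: point this at a real document image.
image = Image.open("sample_document.png").convert("RGB")

classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-small_tobacco3482_kd_MSE",
)

# Prints the top predicted document classes with their confidence scores.
for prediction in classifier(image, top_k=3):
    print(prediction["label"], round(prediction["score"], 4))
```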
Umer1542/task-b-classification
Umer1542
2023-07-10T16:35:37Z
160
0
transformers
[ "transformers", "pytorch", "opt", "text-classification", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2023-07-10T15:47:23Z
--- license: other language: - en metrics: - accuracy - f1 - recall pipeline_tag: text-classification ---
dashan1992/dsl2
dashan1992
2023-07-10T16:35:33Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-10T16:34:19Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
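The card records only the quantization settings used during training. As a sketch of how such an adapter is usually loaded back for inference (assuming the base model recorded in the adapter config is a causal language model, and mirroring the `load_in_8bit: True` setting above):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "dashan1992/dsl2"  # this repository

# The adapter config stores which base model the PEFT weights were trained on.
config = PeftConfig.from_pretrained(adapter_id)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,   # matches the bitsandbytes setting listed above
    device_map="auto",
)

# Wrap the base model with the trained adapter weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```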
TheBloke/MPT-30B-Dolphin-v2-GGML
TheBloke
2023-07-10T16:32:10Z
0
9
null
[ "license:other", "region:us" ]
null
2023-07-10T15:13:07Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Manoj Preveen's MPT 30B Dolphin v2 GGML These files are MPT GGML format model files for [Manoj Preveen's MPT 30B Dolphin v2](https://huggingface.co/manojpreveen/mpt-30b-dolphin-v2). Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools that work with this GGML model. ## Repositories available * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/MPT-30B-Dolphin-v2-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/manojpreveen/mpt-30b-dolphin-v2) ## Prompt template: custom ``` <system>: You are a helpful assistant <human>: {prompt} <bot>: ``` <!-- compatibility_ggml start --> ## Compatibilty These files are **not** compatible with llama.cpp or text-generation-webui. They can be used with: * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful inference engine based on llama.cpp with full GPU acceleration and good UI. * [LM Studio](https://lmstudio.ai/), a fully featured local GUI for GGML inference on Windows and macOS. * [LoLLMs-WebUI](https://github.com/ParisNeo/LoLLMs-WebUI) a web UI which supports nearly every backend out there. Use ctransformers backend for support for this model. * [ctransformers](https://github.com/marella/ctransformers): for use in Python code, including LangChain support. * [rustformers' llm](https://github.com/rustformers/llm) * The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml) As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!) ## Tutorial for using LoLLMs-WebUI: * [Video tutorial, by LoLLMs-WebUI's author **ParisNeo**](https://youtu.be/vBU1b5n0GMU) <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | mpt-30b-dolphin-v2.ggmlv1.q4_0.bin | q4_0 | 4 | 16.85 GB| 19.35 GB | 4-bit. | | mpt-30b-dolphin-v2.ggmlv1.q4_1.bin | q4_1 | 4 | 18.73 GB| 21.23 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | mpt-30b-dolphin-v2.ggmlv1.q5_0.bin | q5_0 | 5 | 20.60 GB| 23.10 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. | | mpt-30b-dolphin-v2.ggmlv1.q5_1.bin | q5_1 | 5 | 22.47 GB| 24.97 GB | 5-bit. Even higher accuracy, resource usage and slower inference. | | mpt-30b-dolphin-v2.ggmlv1.q8_0.bin | q8_0 | 8 | 31.83 GB| 34.33 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. 
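For the Python route via ctransformers mentioned above, a minimal sketch follows; the chosen quantization file comes from the provided-files table and the generation settings are illustrative.

```python
from ctransformers import AutoModelForCausalLM

# Load one of the GGML files listed above; model_type must be "mpt".
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MPT-30B-Dolphin-v2-GGML",
    model_file="mpt-30b-dolphin-v2.ggmlv1.q4_0.bin",
    model_type="mpt",
    gpu_layers=0,  # raise to offload layers to the GPU
)

# Prompt follows the template given earlier in this card.
prompt = (
    "<system>: You are a helpful assistant\n"
    "<human>: Explain GGML quantization in one paragraph.\n"
    "<bot>:"
)

print(llm(prompt, max_new_tokens=200, temperature=0.7))
```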
<!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Manoj Preveen's MPT 30B Dolphin v2 **Base Model :** mosaicml/mpt-30b **Tool :** MosaicML's llm-foundry (https://github.com/mosaicml/llm-foundry) **Dataset :** Entire flan3m-GPT3.5 dataset. **Config yaml with Model Params :** https://huggingface.co/manojpreveen/mpt-30b-orca-v2/blob/main/mpt-30b_orca.yaml **Prompt Format :** ``` <system>: [system prompt] <human>: [question] <bot>: ```
banden/ppo-LunarLander-v2
banden
2023-07-10T16:23:12Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-10T16:22:51Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.46 +/- 41.26 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
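Until the TODO above is filled in, a sketch of the standard loading pattern is shown below; the checkpoint filename inside the repository is an assumption, so check the repo's file list for the actual `.zip` name (gymnasium is used here, as expected by stable-baselines3 v2).

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; replace with the actual checkpoint stored in the repo.
checkpoint = load_from_hub(
    repo_id="banden/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```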
aburnazy/opt125m_alpaca
aburnazy
2023-07-10T16:20:54Z
136
0
transformers
[ "transformers", "pytorch", "tensorboard", "opt", "text-generation", "generated_from_trainer", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T15:40:41Z
--- license: other tags: - generated_from_trainer model-index: - name: opt125m_alpaca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt125m_alpaca This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
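The card gives no inference example; a minimal text-generation sketch follows. The Alpaca-style prompt format is an assumption based on the model name, not something documented here.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="aburnazy/opt125m_alpaca")

# Alpaca-style instruction prompt; this format is assumed, not documented.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain overfitting in one sentence.\n\n### Response:\n"
)

output = generator(prompt, max_new_tokens=64, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```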
himanshubohraxxx/innovaccer
himanshubohraxxx
2023-07-10T16:12:43Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-10T16:12:43Z
--- license: creativeml-openrail-m ---
mitra-mir/setfit_model_labelfaithful_epochs2
mitra-mir
2023-07-10T15:54:42Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-08T13:16:11Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 22 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 44, "warmup_steps": 5, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
sianadouglas/ensembletest
sianadouglas
2023-07-10T15:48:14Z
0
0
null
[ "en", "license:other", "region:us" ]
null
2023-07-10T15:47:23Z
--- license: other language: - en ---
tyavika/LR1E4-BS16-Bert_CNN512LSTM256NoBid
tyavika
2023-07-10T15:31:42Z
77
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-09T20:06:29Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: LR1E4-BS16-Bert_CNN512LSTM256NoBid results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LR1E4-BS16-Bert_CNN512LSTM256NoBid This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7267 | 1.0 | 3290 | 1.5092 | | 1.2394 | 2.0 | 6580 | 1.3933 | | 0.8348 | 3.0 | 9870 | 1.5591 | | 0.542 | 4.0 | 13160 | 1.6667 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
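A question-answering sketch follows, assuming the checkpoint can be served through the standard `transformers` question-answering pipeline; given the custom CNN/LSTM head implied by the model name, the original training code's model class may be needed instead.

```python
from transformers import pipeline

# Assumes the checkpoint loads as a standard QA model; the custom
# CNN/LSTM head may require the original training repository instead.
qa = pipeline(
    "question-answering",
    model="tyavika/LR1E4-BS16-Bert_CNN512LSTM256NoBid",
)

result = qa(
    question="What learning rate was used for fine-tuning?",
    context=(
        "The model was fine-tuned from bert-base-uncased with a learning "
        "rate of 1e-4 for four epochs."
    ),
)
print(result["answer"], round(result["score"], 3))
```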
MnLgt/textual_inversion_muir_1_5
MnLgt
2023-07-10T15:31:36Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-10T14:16:45Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - jordandavis/textual_inversion_muir_1_5 These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
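A minimal loading sketch with diffusers follows, using the repository id under which this card is listed; the trigger token in the prompt is a guess, so check the repository's learned-embedding metadata for the actual placeholder token.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load the learned embedding; the trigger token "<muir>" below is assumed.
pipe.load_textual_inversion("MnLgt/textual_inversion_muir_1_5")

image = pipe("a painting of a coastal valley in the style of <muir>").images[0]
image.save("muir_style.png")
```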